Moving to a TLP


git-svn-id: https://svn.apache.org/repos/asf/giraph/branches/branch-0.1@1342345 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/CHANGELOG b/CHANGELOG
new file mode 100644
index 0000000..f0e9fb5
--- /dev/null
+++ b/CHANGELOG
@@ -0,0 +1,208 @@
+Giraph Change Log
+
+Release 0.1.0 - 2012-01-31 
+
+  GIRAPH-135: Need DISCLAIMER for incubator. (jghoman) 
+
+  GIRAPH-134: Fix NOTICE file for release. (jghoman)
+
+  GIRAPH-128: RPC port from BasicRPCCommunications should be only a
+  starting port, and retried. (aching)
+
+  GIRAPH-131: Enable creation of test-jars to simplify testing in 
+  downstream projects. (André Kelpe via jghoman)
+  
+  GIRAPH-129: Enable creation of javadoc and sources jars.
+  (André Kelpe via jghoman)
+  
+  GIRAPH-124: Combiner should return Iterable<M> instead of M or 
+  null. (claudio)
+
+  GIRAPH-125: Bug in LongDoubleFloatDoubleVertex.sendMsgToAllEdges().
+  (humming80 via aching)
+
+  GIRAPH-122: Roll version back to 0.1. (jghoman)
+
+  GIRAPH-118: Clarify messages behavior in BasicVertex. (claudio)
+
+  GIRAPH-119: VertexCombiner should work on Iterable<M> instead of 
+  List<M>. (claudio)
+
+  GIRAPH-116: Make EdgeListVertex the default vertex implementation,
+  fix bugs related to EdgeListVertex. (aching)
+
+  GIRAPH-115: Port of the HCC algorithm for identifying all connected
+  components of a graph. (ssc via aching)
+
+  GIRAPH-112: Use elements() properly in LongDoubleFloatDoubleVertex.
+  (aching)
+
+  GIRAPH-114: Inconsistent message map handling in
+  BasicRPCCommunications.LargeMessageFlushExecutor. (ssc via aching)
+
+  GIRAPH-109: GiraphRunner should provide support for combiners.
+  (ssc via claudio)
+
+  GIRAPH-113: Change cast to Vertex used in prepareSuperstep() to
+  BasicVertex. (humming80 via aching)
+
+  GIRAPH-110: Add guide to set up the environment for running the
+  unittests in a pseudo-distributed Hadoop instance. (ssc via aching)
+
+  GIRAPH-73: A little refactoring. (ssc via aching)
+
+  GIRAPH-106: Change prepareSuperstep() to make
+  setMessages(Iterable<M> messages) package-private. (aching)
+ 
+  GIRAPH-105: BspServiceMaster.checkWorkers() should return empty
+  lists instead of null. (ssc via aching)
+
+  GIRAPH-80: Don't expose the list holding the messages in
+  BasicVertex. (ssc via aching)
+
+  GIRAPH-103: Added properties for commonly used package version to
+  pom.xml. (aching)
+
+  GIRAPH-57: Add new RPC call (putVertexIdMessagesList) to batch
+  putMsgList RPCs together. (aching)
+
+  GIRAPH-104: Save half of maximum memory used from messaging. (aching)
+
+  GIRAPH-10: Aggregators are not exported. (claudio)
+
+  GIRAPH-100: Data input sampling and testing improvements. (aching)
+
+  GIRAPH-51: Provide unit testing tool for Giraph algorithms.
+  (Sebastian Schelter via jghoman)
+
+  GIRAPH-89: Simplify boolean expressions in BspRecordReader.
+  (shaunak via claudio)
+
+  GIRAPH-90: LongDoubleFloatDoubleVertex possibly has a broken
+  iterator() implementation. (claudio)
+
+  GIRAPH-99: Make AdjacencyListVertexReader and its constructor public.
+  (Kohei Ozaki via jghoman)
+
+  GIRAPH-98: Add Claudio Martella to site. (claudio)
+
+  GIRAPH-97: TestIdWithValueTextOutputFormat.java and
+  IdWithValueTextOutputFormat.java missing license header. (claudio)
+
+  GIRAPH-92: Need outputformat for just vertex ID and value. (jghoman)
+
+  GIRAPH-86: Simplify boolean expressions in ZooKeeperExt::createExt.
+  (attilacsordas via jghoman)
+
+  GIRAPH-91: Large-memory improvements (Memory reduced vertex
+  implementation, fast failure, added settings). (aching)
+
+  GIRAPH-89: Remove debugging system.out from LongDoubleFloatDoubleVertex. 
+  (shaunak via aching)
+
+  GIRAPH-88: Message count not updated properly after GIRAPH-11. (aching)
+
+  GIRAPH-70: Misspellings in PseudoRandomVertexInputFormat configuration
+  parameters. (attilacsordas via jghoman)
+
+  GIRAPH-58: Update site with Arun's id. (asuresh)
+  
+  GIRAPH-11: Improve the graph distribution of Giraph. (aching)
+  
+  GIRAPH-64: Create VertexRunner to make it easier to run users'
+  computations. (jghoman)
+ 
+  GIRAPH-79: Change the menu layout of the site. (hyunsik via jghoman)
+
+  GIRAPH-75: Create sections on how to get involved and how 
+  to generate patches on website. (jghoman)
+
+  GIRAPH-63: Typo in PageRankBenchmark. (shaunak via jghoman)
+
+  GIRAPH-47: Export Worker's Context/State to vertices through
+  pre/post/Application/Superstep. (cmartella via aching)
+
+  GIRAPH-71: SequenceFileVertexInputFormat missing license header; 
+  rat fails. (jghoman)
+
+  GIRAPH-36: Ensure that subclassing BasicVertex is possible by user
+  apps. (jmannix via aching)
+
+  GIRAPH-50: Require Maven 3 in order to work with munging plugin.
+  (jghoman)
+  
+  GIRAPH-67: Provide AdjacencyList InputFormat for Ids of Strings and
+  double values. (jghoman)
+
+  GIRAPH-56: Create a CSV TextOutputFormat. (jghoman)
+
+  GIRAPH-66: Add presentations section to website. (jghoman)
+
+  GIRAPH-62: Provide input format for reading graphs stored as adjacency 
+  lists. (jghoman)
+
+  GIRAPH-59: Missing some test if debug enabled before LOG.debug() and
+  LOG.info(). (guzhiwei via aching)
+
+  GIRAPH-48: numFlushThreads is 0 when doing a single worker 
+  unittest. Changing the minimum to 1. (aching)
+
+  GIRAPH-44: Add documentation about counter limits in Hadoop 0.203+.
+  (mtiwari via jghoman)
+
+  GIRAPH-12: Investigate communication improvements. (hyunsik)
+
+  GIRAPH-46: Race condition on superstep 1 with RPC servers not
+  started by the time that requests are sent. (aching)
+
+  GIRAPH-21: Revise CODE_CONVENTIONS. (aching via jghoman)
+
+  GIRAPH-39: mvn rat doesn't like .git or .idea. (jghoman)
+
+  GIRAPH-32: Implement benchmarks to evaluate the performance of message 
+  passing. (hyunsik)
+
+  GIRAPH-34: Failure of Vertex reflection for putVertexList from
+  GIRAPH-27. (aching)
+
+  GIRAPH-35: Modifying the site to indicate that Jake Mannix and
+  Dmitriy Ryaboy are now Giraph committers. (aching)
+
+  GIRAPH-33: Missing license header of GraphState.java (Claudio
+  Martella via hyunsik)
+
+  GIRAPH-31: Hide the SortedMap<I, Edge<I,E>> in Vertex from client
+  visibility (impl. detail), replace with appropriate accessor
+  methods. (jake.mannix via aching)
+
+  GIRAPH-30: NPE in ZooKeeperManager if base directory cannot be
+  created. (apurtell via aching)
+
+  GIRAPH-27: Mutable static global state in Vertex.java should be
+  refactored. (jake.mannix via aching)
+
+  GIRAPH-25: NPE in BspServiceMaster when failing a job.
+  (dvryaboy via aching)
+
+  GIRAPH-24: Job-level statistics reports one superstep greater than 
+  workers. (jghoman)
+  
+  GIRAPH-18: Refactor BspServiceWorker::loadVertices(). (jghoman)
+  
+  GIRAPH-14: Support for the Facebook Hadoop branch. (aching)
+
+  GIRAPH-16: Add Apache RAT to the verify build step. (omalley)
+
+  GIRAPH-17: Giraph doesn't give up properly after the maximum connect
+  attempts to ZooKeeper. (aching)
+
+  GIRAPH-2: Make the project homepage. (jghoman)
+
+  GIRAPH-9: Change Yahoo License Header to Apache License Header (hyunsik)
+
+  GIRAPH-6: Remove Yahoo-specific code from pom.xml. (jghoman)
+
+  GIRAPH-5: Remove Yahoo directories after svn import from Yahoo! (aching)
+
+  GIRAPH-3: Vertex:sentMsgToAllEdges should be sendMsg. (jghoman)
diff --git a/CODE_CONVENTIONS b/CODE_CONVENTIONS
new file mode 100644
index 0000000..9828a7c
--- /dev/null
+++ b/CODE_CONVENTIONS
@@ -0,0 +1,96 @@
+This codebase follows the Oracle "Code Conventions for the Java
+Programming Language".  See the following link:
+
+http://www.oracle.com/technetwork/java/codeconvtoc-136057.html
+
+In addition, this project has several rules that are more specific:
+
+- No line should use more than 79 characters
+- No tabs, only spaces
+- All indents should be 2 spaces (4 spaces for wrapped continuation lines)
+
+if (<short expression>) {
+  return true;
+}
+
+if (<very, very, very long expression that continues and wraps around this
+    line, use 4 spaces on this following line>) {
+  return true;
+}
+
+- Given there are many generic types, there will be long class definitions.
+  Wrap the line as follows:
+
+public class BspServiceMaster<I extends WritableComparable, V extends Writable,
+    E extends Writable, M extends Writable> extends BspService<I, V, E, M>
+    implements CentralizedServiceMaster<I, V, E, M> {
+  /** Class logger */
+  private static final Logger LOG = Logger.getLogger(BspServiceMaster.class);
+}
+
+- All while/if/else statements must use braces, even if only a one-line
+  statement follows.  'else' and 'else if' are expected to line up
+  with the '}'.  For example:
+
+if (condition) {
+  statement;
+}
+
+if (condition) {
+  statement;
+} else {
+  statement;
+}
+
+- Any use of LOG should be guarded by the corresponding is*Enabled()
+  check.  For example:
+
+if (LOG.isInfoEnabled()) {
+  LOG.info("something happened");
+}
+
+- All classes, members, and member methods should have Javadoc in the
+  following style: C-style comments for javadoc and // comments for
+  non-javadoc.  Also, the comment block should have a line break that
+  separates the comment section and the @ section.  See below.
+
+/**
+ * This is an example class
+ */
+public class Giraffe {
+  /** Number of spots on my giraffe */
+  private int spots;
+  /**
+   * Example horribly long comment that wraps around the line.  If it is very,
+   * very, very long.
+   */
+  private int feet;
+
+  /**
+   * How many seconds to travel a number of steps
+   *
+   * @param steps Steps to travel
+   * @param stepsPerSec Steps a giraffe travels every second
+   * @return Number of seconds
+   */
+  public static int secToTravel(int steps, int stepsPerSec) {
+    // Simple formula to get time to travel
+    return steps / stepsPerSec;
+  }
+}
+
+- When using synchronized statements, there should not be a space between
+  'synchronized' and '('.  For example:
+
+public void foo() {
+  synchronized(bar) {
+  }
+}
+
+- Class members should not begin with 'm_' or '_'
+- No warnings allowed; be as specific as possible with any warning
+  suppression
+- Prefer to avoid abbreviations when reasonable (e.g. 'message' rather
+  than 'msg')
+- Static variable names should be entirely capitalized and separated by '_'
+  (e.g. private static int FOO_BAR_BAR = 2)
+- Non-static variable and method names should begin with a lowercase letter
+  and use only alphanumeric characters (e.g. int fooBarBar)
+- All class names begin with a capital letter and use camel case thereafter
+  (e.g. class FooBarBar)
\ No newline at end of file
diff --git a/DISCLAIMER b/DISCLAIMER
new file mode 100644
index 0000000..f77900c
--- /dev/null
+++ b/DISCLAIMER
@@ -0,0 +1,15 @@
+Apache Giraph is an effort undergoing incubation at the Apache Software
+Foundation (ASF), sponsored by the Apache Incubator PMC.
+
+Incubation is required of all newly accepted projects until a further review
+indicates that the infrastructure, communications, and decision making process
+have stabilized in a manner consistent with other successful ASF projects.
+
+While incubation status is not necessarily a reflection of the completeness
+or stability of the code, it does indicate that the project has yet to be
+fully endorsed by the ASF.
+
+For more information about the incubation status of the Giraph project you
+can go to the following page:
+
+http://incubator.apache.org/giraph/
\ No newline at end of file
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/NOTICE b/NOTICE
new file mode 100644
index 0000000..3c73f8f
--- /dev/null
+++ b/NOTICE
@@ -0,0 +1,5 @@
+Apache Giraph
+Copyright 2012 The Apache Software Foundation.
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
diff --git a/README b/README
new file mode 100644
index 0000000..c6bf79e
--- /dev/null
+++ b/README
@@ -0,0 +1,132 @@
+Giraph : Large-scale graph processing on Hadoop
+
+Web and online social graphs have been rapidly growing in size and
+scale during the past decade.  In 2008, Google estimated that the
+number of web pages reached over a trillion.  Online social networking
+and email sites, including Yahoo!, Google, Microsoft, Facebook,
+LinkedIn, and Twitter, have hundreds of millions of users and are
+expected to grow much more in the future.  Processing these graphs
+plays a big role in providing relevant and personalized information to
+users, such as results from a search engine or news in an online social
+networking site.
+
+Graph processing platforms to run large-scale algorithms (such as page
+rank, shared connections, personalization-based popularity, etc.) have
+become quite popular.  Some recent examples include Pregel and HaLoop.
+For general-purpose big data computation, the map-reduce computing
+model has been well adopted and the most deployed map-reduce
+infrastructure is Apache Hadoop.  We have implemented a
+graph-processing framework that is launched as a typical Hadoop job to
+leverage existing Hadoop infrastructure, such as Amazon’s EC2.  Giraph
+builds upon the graph-oriented nature of Pregel but additionally adds
+fault-tolerance to the coordinator process with the use of ZooKeeper
+as its centralized coordination service.
+
+Giraph follows the bulk-synchronous parallel (BSP) model applied to
+graphs, in which vertices can send messages to other vertices during a
+given superstep.  Checkpoints are initiated by the Giraph infrastructure at
+user-defined intervals and are used for automatic application restarts
+when any worker in the application fails.  Any worker in the
+application can act as the application coordinator and one will
+automatically take over if the current application coordinator fails.
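+
+As a rough illustration of the model (a sketch only -- it assumes the 0.1
+vertex API whose names appear elsewhere in this commit, such as
+EdgeListVertex, sendMsgToAllEdges() and voteToHalt(); exact signatures may
+differ), a computation subclasses a vertex class and implements compute(),
+which consumes the messages from the previous superstep and sends messages
+for the next one:
+
+import java.util.Iterator;
+
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+
+public class MessageSumVertex extends EdgeListVertex<
+    LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {
+  @Override
+  public void compute(Iterator<DoubleWritable> msgIterator) {
+    // Sum the messages received from the previous superstep
+    double sum = 0;
+    while (msgIterator.hasNext()) {
+      sum += msgIterator.next().get();
+    }
+    setVertexValue(new DoubleWritable(sum));
+    if (getSuperstep() < 30) {
+      // Messages sent here are delivered in the next superstep
+      sendMsgToAllEdges(new DoubleWritable(sum));
+    } else {
+      // The application finishes once every vertex has voted to halt
+      voteToHalt();
+    }
+  }
+}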
+
+-------------------------------
+
+Hadoop versions for use with Giraph:
+
+Secure Hadoop versions:
+- Apache Hadoop 0.20.203, 0.20.204 (other secure versions may work as well)
+-- Other versions reported working include:
+---  Cloudera CDH3u0, CDH3u1
+
+Non-secure Hadoop versions:
+- Apache Hadoop 0.20.1, 0.20.2, 0.20.3
+
+Facebook Hadoop release (https://github.com/facebook/hadoop-20-warehouse):
+- GitHub master
+
+While we provide support for the non-secure and Facebook versions of Hadoop
+with the maven profiles 'hadoop_non_secure' and 'hadoop_facebook',
+respectively, we have been primarily focusing on secure Hadoop releases
+at this time.
+
+-------------------------------
+
+Building and testing:
+
+You will need the following:
+- Java 1.6
+- Maven 3 or higher. Giraph uses the munge plugin 
+  (http://sonatype.github.com/munge-maven-plugin/),
+  which requires Maven 3, to support multiple versions of Hadoop. Also, the
+  web site plugin requires Maven 3.
+
+Use the maven commands with secure Hadoop to:
+- compile (i.e. mvn compile)
+- package (i.e. mvn package)
+- test (i.e. mvn test)
+
+For the non-secure versions of Hadoop, run the maven commands with the
+additional argument '-Dhadoop=non_secure' to enable the maven profile
+'hadoop_non_secure'.  An example compilation command is
+'mvn -Dhadoop=non_secure compile'.
+
+For the Facebook Hadoop release, run the maven commands with the
+additional argument '-Dhadoop=facebook' to enable the maven profile
+'hadoop_facebook', as well as '-Dhadoop.jar.path' pointing to the Hadoop
+core jar file.  An example compilation command is 'mvn -Dhadoop=facebook
+-Dhadoop.jar.path=/tmp/hadoop-0.20.1-core.jar compile'.
+
+
+How to run the unittests on a local pseudo-distributed Hadoop instance:
+
+As mentioned earlier, Giraph supports several versions of Hadoop.  In
+this section, we describe how to run the Giraph unittests against a
+single-node instance of Apache Hadoop 0.20.203.
+
+Download Apache Hadoop 0.20.203 (hadoop-0.20.203.0/hadoop-0.20.203.0rc1.tar.gz)
+from a mirror picked at http://www.apache.org/dyn/closer.cgi/hadoop/common/
+and unpack it into a local directory.
+
+Follow the guide at 
+http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html#PseudoDistributed
+to set up a pseudo-distributed single-node Hadoop cluster.
+
+Giraph’s code assumes that you can run at least 4 mappers at once;
+unfortunately, the default configuration allows only 2.  Therefore you
+need to update conf/mapred-site.xml:
+
+<property>
+  <name>mapred.tasktracker.map.tasks.maximum</name>
+  <value>4</value>
+</property>
+
+<property>
+  <name>mapred.map.tasks</name>
+  <value>4</value>
+</property>
+
+After preparing the local filesystem with:
+
+rm -rf /tmp/hadoop-<username>
+/path/to/hadoop/bin/hadoop namenode -format
+
+you can start the local hadoop instance:
+
+/path/to/hadoop/bin/start-all.sh
+
+and finally run Giraph’s unittests:
+
+mvn clean test -Dprop.mapred.job.tracker=localhost:9001
+
+Now you can open a browser, point it to http://localhost:50030 and watch the
+Giraph jobs from the unittests running on your local Hadoop instance!
+
+
+Notes: 
+Counter limit: From Hadoop 0.20.203.0 onwards, there is a limit on the
+number of counters one can use, which is set to 120 by default.  This
+limit restricts the number of iterations/supersteps possible in Giraph.
+It can be increased by setting the parameter "mapreduce.job.counters.limit"
+in the job tracker's configuration file, mapred-site.xml.
+
diff --git a/bin/giraph b/bin/giraph
new file mode 100755
index 0000000..734eefd
--- /dev/null
+++ b/bin/giraph
@@ -0,0 +1,87 @@
+#!/bin/bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
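+# Usage (illustrative sketch; the remaining arguments are forwarded to
+# org.apache.giraph.GiraphRunner):
+#
+#   giraph [-Dkey=value ...] <user-jar> <VertexClass> \
+#       -w <workers> -if <VertexInputFormat> [-of <VertexOutputFormat>] \
+#       [-ip <input path>] [-op <output path>] [-c <combiner>] \
+#       [-wc <worker context>] [-aw <aggregator writer>] [-q]
+#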
+# resolve links - $0 may be a softlink
+THIS="$0"
+while [ -h "$THIS" ]; do
+  ls=`ls -ld "$THIS"`
+  link=`expr "$ls" : '.*-> \(.*\)$'`
+  if expr "$link" : '.*/.*' > /dev/null; then
+    THIS="$link"
+  else
+    THIS=`dirname "$THIS"`/"$link"
+  fi
+done
+
+# some directories
+THIS_DIR=`dirname "$THIS"`
+GIRAPH_HOME=`cd "$THIS_DIR/.." ; pwd`
+
+# extra properties to pass straight to Hadoop
+HADOOP_PROPERTIES=
+while [ -n "$1" ] && [ "${1:0:2}" == "-D" ] ; do
+    HADOOP_PROPERTIES="$1 $HADOOP_PROPERTIES"
+    shift
+done
+
+USER_JAR=$1
+shift
+
+if [ ! -e "$USER_JAR" ]; then
+  echo "Can't find user jar to execute."
+  exit 1
+fi
+
+# add user jar to classpath
+CLASSPATH=${USER_JAR}
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+# add release dependencies to CLASSPATH
+for f in $GIRAPH_HOME/lib/*.jar; do
+  CLASSPATH=${CLASSPATH}:$f;
+done
+
+CLASS=org.apache.giraph.GiraphRunner
+
+for f in $GIRAPH_HOME/lib/giraph*.jar ; do
+  if [ -e "$f" ]; then
+    JAR=$f
+  fi
+done
+
+# restore ordinary behaviour
+unset IFS
+
+if [ "$JAR" = "" ] ; then
+  echo "Can't find Giraph jar."
+  exit 1
+fi
+
+if [ "$HADOOP_CONF_DIR" = "" ] ; then
+  HADOOP_CONF_DIR=$HADOOP_HOME/conf
+  echo "No HADOOP_CONF_DIR set, using $HADOOP_HOME/conf "
+else
+  echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
+fi
+
+# Giraph's jars to add to the distributed cache via -libjars; the list must
+# be comma-separated rather than colon-separated
+GIRAPH_JARS=`echo ${JAR}:${CLASSPATH}|sed s/:/,/g`
+export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$CLASSPATH
+
+exec "$HADOOP_HOME/bin/hadoop" --config $HADOOP_CONF_DIR jar $JAR $CLASS $HADOOP_PROPERTIES -libjars $GIRAPH_JARS  "$@"
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
new file mode 100644
index 0000000..bcf0726
--- /dev/null
+++ b/pom.xml
@@ -0,0 +1,591 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <groupId>org.apache.giraph</groupId>
+  <artifactId>giraph</artifactId>
+  <packaging>jar</packaging>
+  <version>0.1</version>
+
+  <name>Apache Incubator Giraph</name>
+  <url>http://incubator.apache.org/giraph/</url>
+  <description>Giraph : Large-scale graph processing on Hadoop</description>
+  <inceptionYear>2011</inceptionYear>
+
+  <scm>
+    <connection>scm:svn:http://svn.apache.org/repos/asf/incubator/giraph</connection>
+    <developerConnection>scm:svn:https://svn.apache.org/repos/asf/incubator/giraph/trunk</developerConnection>
+    <url>https://svn.apache.org/repos/asf/incubator/giraph/</url>
+  </scm>
+
+  <issueManagement>
+    <system>JIRA</system>
+    <url>http://issues.apache.org/jira/browse/GIRAPH</url>
+  </issueManagement>
+
+  <licenses>
+    <license>
+      <name>Apache 2</name>
+      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+      <distribution>repo</distribution>
+      <comments>A business-friendly OSS license</comments>
+    </license>
+  </licenses>
+
+  <organization>
+    <name>The Apache Software Foundation</name>
+    <url>http://www.apache.org</url>
+  </organization>
+
+  <mailingLists>
+    <mailingList>
+      <name>User List</name>
+      <subscribe>giraph-user-subscribe@incubator.apache.org</subscribe>
+      <unsubscribe>giraph-user-unsubscribe@incubator.apache.org</unsubscribe>
+      <post>giraph-user@incubator.apache.org</post>
+      <archive>http://mail-archives.apache.org/mod_mbox/incubator-giraph-user/</archive>
+    </mailingList>
+    <mailingList>
+      <name>Developer List</name>
+      <subscribe>giraph-dev-subscribe@incubator.apache.org</subscribe>
+      <unsubscribe>giraph-dev-unsubscribe@incubator.apache.org</unsubscribe>
+      <post>giraph-dev@incubator.apache.org</post>
+      <archive>http://mail-archives.apache.org/mod_mbox/incubator-giraph-dev/</archive>
+    </mailingList>
+    <mailingList>
+      <name>Commits List</name>
+      <subscribe>giraph-commits-subscribe@incubator.apache.org</subscribe>
+      <unsubscribe>giraph-commits-unsubscribe@incubator.apache.org</unsubscribe>
+      <post>giraph-commits@incubator.apache.org</post>
+      <archive>http://mail-archives.apache.org/mod_mbox/incubator-giraph-commits/</archive>
+    </mailingList>
+  </mailingLists>
+
+  <developers>
+    <developer>
+      <id>aching</id>
+      <name>Avery Ching</name>
+      <email>aching@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>Facebook</organization>
+      <organizationUrl>http://www.facebook.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>hyunsik</id>
+      <name>Hyunsik Choi</name>
+      <email>hyunsik@apache.org</email>
+      <timezone>+9</timezone>
+      <organization>Database Lab, Korea University </organization>
+    </developer>
+    <developer>
+      <id>jghoman</id>
+      <name>Jakob Homan</name>
+      <email>jghoman@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>LinkedIn</organization>
+      <organizationUrl>http://www.linkedin.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>kunzchr</id>
+      <name>Christian Kunz</name>
+      <email>christian@jybe-inc.com</email>
+      <timezone>-8</timezone>
+      <organization>Jybe</organization>
+      <organizationUrl>http://jy.be</organizationUrl>
+    </developer>
+    <developer>
+      <id>omalley</id>
+      <name>Owen O'Malley</name>
+      <email>owen@hortonworks.com</email>
+      <timezone>-8</timezone>
+      <organization>HortonWorks</organization>
+      <organizationUrl>http://www.hortonworks.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>prhodes</id>
+      <name>Phillip Rhodes</name>
+      <email>phrodes@apache.org</email>
+      <timezone>-5</timezone>
+      <organization>Fogbeam Labs</organization>
+      <organizationUrl>http://www.fogbeam.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>asuresh</id>
+      <name>Arun Suresh</name>
+      <email>asuresh@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>Informatica</organization>
+      <organizationUrl>http://www.informatica.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>jake.mannix</id>
+      <name>Jake Mannix</name>
+      <email>jmannix@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>Twitter</organization>
+      <organizationUrl>http://www.twitter.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>dvryaboy</id>
+      <name>Dmitriy Ryaboy</name>
+      <email>dvryaboy@gmail.com</email>
+      <timezone>-8</timezone>
+      <organization>Twitter</organization>
+      <organizationUrl>http://www.twitter.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>claudio</id>
+      <name>Claudio Martella</name>
+      <email>claudio@apache.org</email>
+      <timezone>+1</timezone>
+      <organization>LSDS group, VU Amsterdam</organization>
+    </developer>
+  </developers>
+
+  <properties>
+    <compileSource>1.6</compileSource>
+    <hadoop.version>0.20.203.0</hadoop.version>
+    <maven-compiler-plugin.version>2.3.2</maven-compiler-plugin.version>
+    <maven-javadoc-plugin.version>2.6</maven-javadoc-plugin.version>
+    <jackson.version>1.8.0</jackson.version>
+    <export-target.dir>export/target</export-target.dir>
+    <buildtype>test</buildtype>
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <munge-maven-plugin.version>1.0</munge-maven-plugin.version>
+  </properties>
+
+  <build>
+    <plugins>
+
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-enforcer-plugin</artifactId>
+        <version>1.0.1</version>
+        <executions>
+          <execution>
+            <id>enforce-maven</id>
+            <goals>
+              <goal>enforce</goal>
+            </goals>
+            <configuration>
+              <rules>
+                <requireMavenVersion>
+                  <version>3.0.0</version>
+                </requireMavenVersion>
+              </rules>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-assembly-plugin</artifactId>
+        <version>2.2</version>
+        <executions>
+          <execution>
+            <id>build-fat-jar</id>
+            <!-- this is used for inheritance merges -->
+            <phase>compile</phase>
+            <!-- append to the packaging phase. -->
+            <configuration>
+              <descriptorRefs>
+                <descriptorRef>jar-with-dependencies</descriptorRef>
+              </descriptorRefs>
+            </configuration>
+            <goals>
+              <goal>single</goal>
+              <!-- goals == mojos -->
+            </goals>
+          </execution>
+          <execution>
+            <id>make-assembly</id>
+            <!-- this is used for inheritance merges -->
+            <phase>package</phase>
+            <!-- append to the packaging phase. -->
+            <configuration>
+              <!-- Specifies the configuration file of the assembly plugin -->
+              <descriptors>
+                <descriptor>${basedir}/src/main/assembly/assembly.xml
+                </descriptor>
+              </descriptors>
+              <outputDirectory>target</outputDirectory>
+            </configuration>
+            <goals>
+              <goal>single</goal>
+              <!-- goals == mojos -->
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <version>2.6</version>
+        <configuration>
+          <systemProperties>
+            <property>
+              <name>prop.jarLocation</name>
+              <value>target/giraph-${project.version}-jar-with-dependencies.jar</value>
+            </property>
+          </systemProperties>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <version>${maven-compiler-plugin.version}</version>
+        <configuration>
+          <source>${compileSource}</source>
+          <target>${compileSource}</target>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-javadoc-plugin</artifactId>
+        <version>${maven-javadoc-plugin.version}</version>
+        <executions>
+          <execution>
+            <id>attach-javadocs</id>
+            <goals>
+              <goal>jar</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-source-plugin</artifactId>
+        <version>2.1.2</version>
+        <executions>
+          <execution>
+            <id>attach-sources</id>
+            <goals>
+              <goal>jar</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-jar-plugin</artifactId>
+        <version>2.3.2</version>
+        <executions>
+          <execution>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>findbugs-maven-plugin</artifactId>
+        <version>2.3.2</version>
+      </plugin>
+    <plugin>
+      <groupId>org.apache.maven.plugins</groupId>
+      <artifactId>maven-site-plugin</artifactId>
+      <version>3.0</version>
+      <configuration>
+        <reportPlugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-project-info-reports-plugin</artifactId>
+            <version>2.2</version>
+            <reports>
+              <report>index</report>
+              <report>project-team</report>
+              <report>license</report>
+              <report>mailing-list</report>
+              <report>dependencies</report>
+              <report>dependency-convergence</report>
+              <report>plugin-management</report>
+              <report>cim</report>
+              <report>issue-tracking</report>
+              <report>scm</report>
+              <report>summary</report>
+            </reports>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-report-plugin</artifactId>
+            <version>2.6</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-javadoc-plugin</artifactId>
+            <version>2.7</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-checkstyle-plugin</artifactId>
+            <version>2.6</version>
+          </plugin>
+          <plugin>
+            <groupId>org.codehaus.mojo</groupId>
+            <artifactId>jdepend-maven-plugin</artifactId>
+            <version>2.0-beta-2</version>
+          </plugin>
+          <plugin>
+            <groupId>org.codehaus.mojo</groupId>
+            <artifactId>cobertura-maven-plugin</artifactId>
+            <version>2.4</version>
+          </plugin>
+          <plugin>
+            <groupId>org.codehaus.mojo</groupId>
+            <artifactId>taglist-maven-plugin</artifactId>
+            <version>2.4</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-jxr-plugin</artifactId>
+            <version>2.1</version>
+          </plugin>
+       </reportPlugins>
+      </configuration>
+    </plugin>
+    <plugin>
+      <groupId>org.apache.rat</groupId>
+      <artifactId>apache-rat-plugin</artifactId>
+      <version>0.7</version>
+      <executions>
+        <execution>
+          <phase>verify</phase>
+          <goals>
+            <goal>check</goal>
+          </goals>
+        </execution>
+      </executions>
+      <configuration>
+         <excludeSubProjects>false</excludeSubProjects>
+         <numUnapprovedLicenses>0</numUnapprovedLicenses>
+         <excludes>
+            <exclude>CODE_CONVENTIONS</exclude>
+            <!-- generated content -->
+            <exclude>**/target/**</exclude>
+            <exclude>_bsp/**</exclude>
+            <!-- source control and IDEs -->
+            <exclude>.git/**</exclude>
+            <exclude>.idea/**</exclude>
+         </excludes>
+      </configuration>
+    </plugin>
+    </plugins>
+  </build>
+
+  <profiles>
+    <profile>
+      <id>hadoop_non_secure</id>
+       <activation>
+        <property>
+          <name>hadoop</name>
+          <value>non_secure</value>
+        </property>
+      </activation>
+      <properties>
+        <hadoop.version>0.20.2</hadoop.version>
+      </properties>
+      <build>
+        <resources>
+          <resource>
+            <directory>src/main/java/org/apache/giraph/hadoop</directory>
+            <excludes>
+              <exclude>BspTokenSelector.java</exclude>
+            </excludes>
+          </resource>
+        </resources>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>munge-maven-plugin</artifactId>
+            <version>${munge-maven-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>munge</id>
+                <phase>generate-sources</phase>
+                <goals>
+                  <goal>munge</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
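+              <!-- munge conditionally includes/strips source guarded by
+                   this preprocessor symbol in comment directives -->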
+              <symbols>HADOOP_NON_SECURE</symbols>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-compiler-plugin</artifactId>
+            <version>${maven-compiler-plugin.version}</version>
+            <configuration>
+              <excludes>
+                <exclude>**/BspTokenSelector.java</exclude>
+              </excludes>
+              <source>${compileSource}</source>
+              <target>${compileSource}</target>
+              <showWarnings>true</showWarnings>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>hadoop_facebook</id>
+       <activation>
+        <property>
+          <name>hadoop</name>
+          <value>facebook</value>
+        </property>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>com.facebook.hadoop</groupId>
+          <artifactId>hadoop-core</artifactId>
+          <version>0.20.1</version>
+          <type>jar</type>
+          <scope>system</scope>
+          <systemPath>${hadoop.jar.path}</systemPath>
+        </dependency>
+      </dependencies>
+      <build>
+        <resources>
+          <resource>
+            <directory>src/main/java/org/apache/giraph/hadoop</directory>
+            <excludes>
+              <exclude>BspTokenSelector.java</exclude>
+            </excludes>
+          </resource>
+        </resources>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>munge-maven-plugin</artifactId>
+            <version>${munge-maven-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>munge</id>
+                <phase>generate-sources</phase>
+                <goals>
+                  <goal>munge</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <symbols>HADOOP_FACEBOOK</symbols>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-compiler-plugin</artifactId>
+            <version>${maven-compiler-plugin.version}</version>
+            <configuration>
+              <excludes>
+                <exclude>**/BspTokenSelector.java</exclude>
+              </excludes>
+              <source>${compileSource}</source>
+              <target>${compileSource}</target>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+  <dependencies>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>3.8.1</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-core</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.codehaus.jackson</groupId>
+      <artifactId>jackson-core-asl</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.mahout</groupId>
+      <artifactId>mahout-collections</artifactId>
+      <version>1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+      <version>r09</version>
+    </dependency>
+    <dependency>
+      <groupId>org.codehaus.jackson</groupId>
+      <artifactId>jackson-mapper-asl</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.zookeeper</groupId>
+      <artifactId>zookeeper</artifactId>
+      <version>3.3.3</version>
+      <exclusions>
+        <exclusion>
+          <groupId>com.sun.jmx</groupId>
+          <artifactId>jmxri</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jdmk</groupId>
+          <artifactId>jmxtools</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.jms</groupId>
+          <artifactId>jms</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-io</artifactId>
+      <version>1.3.2</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-cli</groupId>
+      <artifactId>commons-cli</artifactId>
+      <version>1.2</version>
+    </dependency>
+    <dependency>
+      <groupId>net.iharder</groupId>
+      <artifactId>base64</artifactId>
+      <version>2.3.8</version>
+    </dependency>
+    <dependency>
+      <groupId>org.json</groupId>
+      <artifactId>json</artifactId>
+      <version>20090211</version>
+    </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <version>1.8.5</version>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+</project>
diff --git a/src/main/assembly/assembly.xml b/src/main/assembly/assembly.xml
new file mode 100644
index 0000000..9f6ff52
--- /dev/null
+++ b/src/main/assembly/assembly.xml
@@ -0,0 +1,86 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>bin</id>
+  <formats>
+    <format>tar.gz</format>
+  </formats>
+  <moduleSets>
+    <moduleSet>
+      <binaries>
+        <includeDependencies>true</includeDependencies>
+        <outputDirectory>lib</outputDirectory>
+        <unpack>false</unpack>
+        <dependencySets>
+          <dependencySet/>
+        </dependencySets>
+      </binaries>
+    </moduleSet>
+  </moduleSets>
+  <fileSets>
+    <fileSet>
+      <directory>${project.build.directory}</directory>
+      <outputDirectory>/lib</outputDirectory>
+      <includes>
+        <include>*.jar</include>
+      </includes>
+      <excludes>
+        <exclude>giraph*jar-with-dependencies.jar</exclude>
+      </excludes>
+    </fileSet>
+
+    <fileSet>
+      <includes>
+        <include>${basedir}/CHANGELOG</include>
+        <include>${basedir}/LICENSE.txt</include>
+        <include>${basedir}/NOTICE</include>
+        <include>${basedir}/README</include>
+        <include>${basedir}/CODE_CONVENTIONS</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <includes>
+        <include>pom.xml</include>
+      </includes>
+    </fileSet>
+
+    <fileSet>
+      <directory>src</directory>
+    </fileSet>
+
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <fileMode>755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>target/site</directory>
+      <outputDirectory>docs</outputDirectory>
+    </fileSet>
+
+  </fileSets>
+  <dependencySets>
+    <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
+      <outputDirectory>/lib</outputDirectory>
+      <unpack>false</unpack>
+      <scope>runtime</scope>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/src/main/java/org/apache/giraph/GiraphRunner.java b/src/main/java/org/apache/giraph/GiraphRunner.java
new file mode 100644
index 0000000..121f9c1
--- /dev/null
+++ b/src/main/java/org/apache/giraph/GiraphRunner.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph;
+
+import org.apache.commons.cli.BasicParser;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Logger;
+
+public class GiraphRunner implements Tool {
+  private static final Logger LOG = Logger.getLogger(GiraphRunner.class);
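+  /** Hadoop configuration, injected via ToolRunner */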
+  private Configuration conf;
+
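+  /**
+   * Required command-line options: each entry pairs a short option name
+   * with the error message printed when that option is missing.
+   */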
+  final String [][] requiredOptions =
+      {{"w", "Need to choose the number of workers (-w)"},
+       {"if", "Need to set inputformat (-if)"}};
+
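+  /**
+   * Build the command-line options understood by this runner.
+   *
+   * @return Options for parsing the command line
+   */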
+  private Options getOptions() {
+    Options options = new Options();
+    options.addOption("h", "help", false, "Help");
+    options.addOption("q", "quiet", false, "Quiet output");
+    options.addOption("w", "workers", true, "Number of workers");
+    options.addOption("if", "inputFormat", true, "Graph inputformat");
+    options.addOption("of", "outputFormat", true, "Graph outputformat");
+    options.addOption("ip", "inputPath", true, "Graph input path");
+    options.addOption("op", "outputPath", true, "Graph output path");
+    options.addOption("c", "combiner", true, "VertexCombiner class");
+    options.addOption("wc", "workerContext", true, "WorkerContext class");
+    options.addOption("aw", "aggregatorWriter", true, "AggregatorWriter class");
+    return options;
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+  @Override
+  public int run(String[] args) throws Exception {
+    Options options = getOptions();
+    HelpFormatter formatter = new HelpFormatter();
+    if (args.length == 0) {
+      formatter.printHelp(getClass().getName(), options, true);
+      return 0;
+    }
+
+    String vertexClassName = args[0];
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Attempting to run Vertex: " + vertexClassName);
+    }
+
+    CommandLineParser parser = new BasicParser();
+    CommandLine cmd = parser.parse(options, args);
+
+    if (cmd.hasOption('h')) {
+      formatter.printHelp(getClass().getName(), options, true);
+      return 0;
+    }
+
+    // Verify all the required options have been provided
+    for (String[] requiredOption : requiredOptions) {
+      if (!cmd.hasOption(requiredOption[0])) {
+        System.out.println(requiredOption[1]);
+        return -1;
+      }
+    }
+
+    int workers = Integer.parseInt(cmd.getOptionValue('w'));
+    GiraphJob job = new GiraphJob(getConf(), "Giraph: " + vertexClassName);
+    job.setVertexClass(Class.forName(vertexClassName));
+    job.setVertexInputFormatClass(Class.forName(cmd.getOptionValue("if")));
+    job.setVertexOutputFormatClass(Class.forName(cmd.getOptionValue("of")));
+
+    if(cmd.hasOption("ip")) {
+      FileInputFormat.addInputPath(job, new Path(cmd.getOptionValue("ip")));
+    } else {
+      LOG.info("No input path specified. Ensure your InputFormat does not " +
+              "require one.");
+    }
+
+    if(cmd.hasOption("op")) {
+      FileOutputFormat.setOutputPath(job, new Path(cmd.getOptionValue("op")));
+    } else {
+      LOG.info("No output path specified. Ensure your OutputFormat does not " +
+              "require one.");
+    }
+
+    if (cmd.hasOption("c")) {
+        job.setVertexCombinerClass(Class.forName(cmd.getOptionValue("c")));
+    }
+
+    if (cmd.hasOption("wc")) {
+        job.setWorkerContextClass(Class.forName(cmd.getOptionValue("wc")));
+    }
+
+    if (cmd.hasOption("aw")) {
+        job.setAggregatorWriterClass(Class.forName(cmd.getOptionValue("aw")));
+    }
+
+    job.setWorkerConfiguration(workers, workers, 100.0f);
+
+    // Run verbosely unless quiet output (-q) was requested
+    boolean isVerbose = !cmd.hasOption('q');
+
+    return job.run(isVerbose) ? 0 : -1;
+  }
+
+  public static void main(String[] args) throws Exception {
+    System.exit(ToolRunner.run(new GiraphRunner(), args));
+  }
+}
diff --git a/src/main/java/org/apache/giraph/benchmark/PageRankBenchmark.java b/src/main/java/org/apache/giraph/benchmark/PageRankBenchmark.java
new file mode 100644
index 0000000..2f6ea8e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/benchmark/PageRankBenchmark.java
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.benchmark;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.HashMapVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import java.util.Iterator;
+
+/**
+ * Benchmark based on the basic Pregel PageRank implementation.
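+ *
+ * A hedged invocation sketch (the jar name is an illustrative placeholder):
+ * <pre>
+ * hadoop jar giraph-with-dependencies.jar \
+ *   org.apache.giraph.benchmark.PageRankBenchmark -w 4 -s 5 -V 1000 -e 10
+ * </pre>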
+ */
+public class PageRankBenchmark implements Tool {
+    /** Configuration from Configurable */
+    private Configuration conf;
+
+    /** How many supersteps to run */
+    public static final String SUPERSTEP_COUNT =
+        "PageRankBenchmark.superstepCount";
+
+    public static class PageRankHashMapVertex extends HashMapVertex<
+            LongWritable, DoubleWritable, DoubleWritable, DoubleWritable> {
+        @Override
+        public void compute(Iterator<DoubleWritable> msgIterator) {
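+            // Standard PageRank update with a 0.85 damping factor:
+            // value = 0.15 / numVertices + 0.85 * sum(incoming messages);
+            // each neighbor sent value / numOutEdges in the prior superstep.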
+            if (getSuperstep() >= 1) {
+                double sum = 0;
+                while (msgIterator.hasNext()) {
+                    sum += msgIterator.next().get();
+                }
+                DoubleWritable vertexValue =
+                    new DoubleWritable((0.15f / getNumVertices()) + 0.85f *
+                                       sum);
+                setVertexValue(vertexValue);
+            }
+
+            if (getSuperstep() < getConf().getInt(SUPERSTEP_COUNT, -1)) {
+                long edges = getNumOutEdges();
+                sendMsgToAllEdges(
+                    new DoubleWritable(getVertexValue().get() / edges));
+            } else {
+                voteToHalt();
+            }
+        }
+    }
+
+    public static class PageRankEdgeListVertex extends EdgeListVertex<
+            LongWritable, DoubleWritable, DoubleWritable, DoubleWritable> {
+        @Override
+        public void compute(Iterator<DoubleWritable> msgIterator) {
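+            // Same PageRank update as PageRankHashMapVertex above, backed by
+            // an edge list instead of a hash map.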
+            if (getSuperstep() >= 1) {
+                double sum = 0;
+                while (msgIterator.hasNext()) {
+                    sum += msgIterator.next().get();
+                }
+                DoubleWritable vertexValue =
+                    new DoubleWritable((0.15f / getNumVertices()) + 0.85f *
+                                       sum);
+                setVertexValue(vertexValue);
+            }
+
+            if (getSuperstep() < getConf().getInt(SUPERSTEP_COUNT, -1)) {
+                long edges = getNumOutEdges();
+                sendMsgToAllEdges(
+                        new DoubleWritable(getVertexValue().get() / edges));
+            } else {
+                voteToHalt();
+            }
+        }
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+        Options options = new Options();
+        options.addOption("h", "help", false, "Help");
+        options.addOption("v", "verbose", false, "Verbose");
+        options.addOption("w",
+                          "workers",
+                          true,
+                          "Number of workers");
+        options.addOption("s",
+                          "supersteps",
+                          true,
+                          "Supersteps to execute before finishing");
+        options.addOption("V",
+                          "aggregateVertices",
+                          true,
+                          "Aggregate vertices");
+        options.addOption("e",
+                          "edgesPerVertex",
+                          true,
+                          "Edges per vertex");
+        options.addOption("c",
+                          "vertexClass",
+                          true,
+                          "Vertex class (0 for Vertex, 1 for EdgeListVertex)");
+        HelpFormatter formatter = new HelpFormatter();
+        if (args.length == 0) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmd = parser.parse(options, args);
+        if (cmd.hasOption('h')) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        if (!cmd.hasOption('w')) {
+            System.out.println("Need to choose the number of workers (-w)");
+            return -1;
+        }
+        if (!cmd.hasOption('s')) {
+            System.out.println("Need to set the number of supersteps (-s)");
+            return -1;
+        }
+        if (!cmd.hasOption('V')) {
+            System.out.println("Need to set the aggregate vertices (-V)");
+            return -1;
+        }
+        if (!cmd.hasOption('e')) {
+            System.out.println("Need to set the number of edges " +
+                               "per vertex (-e)");
+            return -1;
+        }
+
+        int workers = Integer.parseInt(cmd.getOptionValue('w'));
+        GiraphJob job = new GiraphJob(getConf(), getClass().getName());
+        if (!cmd.hasOption('c') ||
+                (Integer.parseInt(cmd.getOptionValue('c')) == 0)) {
+            System.out.println("Using " +
+                                PageRankHashMapVertex.class.getName());
+            job.setVertexClass(PageRankHashMapVertex.class);
+        } else {
+            System.out.println("Using " +
+                                PageRankEdgeListVertex.class.getName());
+            job.setVertexClass(PageRankEdgeListVertex.class);
+        }
+        job.setVertexInputFormatClass(PseudoRandomVertexInputFormat.class);
+        job.setWorkerConfiguration(workers, workers, 100.0f);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.AGGREGATE_VERTICES,
+            Long.parseLong(cmd.getOptionValue('V')));
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.EDGES_PER_VERTEX,
+            Long.parseLong(cmd.getOptionValue('e')));
+        job.getConfiguration().setInt(
+            SUPERSTEP_COUNT,
+            Integer.parseInt(cmd.getOptionValue('s')));
+
+        boolean isVerbose = cmd.hasOption('v');
+        return job.run(isVerbose) ? 0 : -1;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.exit(ToolRunner.run(new PageRankBenchmark(), args));
+    }
+}
diff --git a/src/main/java/org/apache/giraph/benchmark/PseudoRandomVertexInputFormat.java b/src/main/java/org/apache/giraph/benchmark/PseudoRandomVertexInputFormat.java
new file mode 100644
index 0000000..970232a
--- /dev/null
+++ b/src/main/java/org/apache/giraph/benchmark/PseudoRandomVertexInputFormat.java
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.benchmark;
+
+import com.google.common.collect.Maps;
+import org.apache.giraph.bsp.BspInputSplit;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * This VertexInputFormat is meant for large scale testing.  It lets the user
+ * create an input data source with a configurable number of aggregate
+ * vertices and edges per vertex; the generated graph is repeatable for the
+ * exact same parameters (pseudo-random).
+ */
+public class PseudoRandomVertexInputFormat<M extends Writable> extends
+        VertexInputFormat<LongWritable, DoubleWritable, DoubleWritable, M> {
+    /** Set the number of aggregate vertices */
+    public static final String AGGREGATE_VERTICES =
+        "pseudoRandomVertexReader.aggregateVertices";
+    /** Set the number of edges per vertex (pseudo-random destination) */
+    public static final String EDGES_PER_VERTEX =
+        "pseudoRandomVertexReader.edgesPerVertex";
+
+    @Override
+    public List<InputSplit> getSplits(JobContext context, int numWorkers)
+            throws IOException, InterruptedException {
+        // The splits carry no real data; the PseudoRandomVertexReader will
+        // generate all the test data
+        List<InputSplit> inputSplitList = new ArrayList<InputSplit>();
+        for (int i = 0; i < numWorkers; ++i) {
+            inputSplitList.add(new BspInputSplit(i, numWorkers));
+        }
+        return inputSplitList;
+    }
+
+    @Override
+    public VertexReader<LongWritable, DoubleWritable, DoubleWritable, M>
+            createVertexReader(InputSplit split, TaskAttemptContext context)
+            throws IOException {
+        return new PseudoRandomVertexReader<M>();
+    }
+
+    /**
+     * Used by {@link PseudoRandomVertexInputFormat} to read
+     * pseudo-randomly generated data
+     */
+    private static class PseudoRandomVertexReader<M extends Writable> implements
+            VertexReader<LongWritable, DoubleWritable, DoubleWritable, M> {
+        /** Logger */
+        private static final Logger LOG =
+            Logger.getLogger(PseudoRandomVertexReader.class);
+        /** Starting vertex id */
+        private long startingVertexId = -1;
+        /** Vertices read so far */
+        private long verticesRead = 0;
+        /** Total vertices to read (on this split alone) */
+        private long totalSplitVertices = -1;
+        /** Aggregate vertices (all input splits) */
+        private long aggregateVertices = -1;
+        /** Edges per vertex */
+        private long edgesPerVertex = -1;
+        /** BspInputSplit (used only for index) */
+        private BspInputSplit bspInputSplit;
+
+        private Configuration configuration;
+
+        public PseudoRandomVertexReader() {
+        }
+
+        @Override
+        public void initialize(InputSplit inputSplit,
+                               TaskAttemptContext context) throws IOException {
+            configuration = context.getConfiguration();
+            aggregateVertices =
+                configuration.getLong(
+                    PseudoRandomVertexInputFormat.AGGREGATE_VERTICES, 0);
+            if (aggregateVertices <= 0) {
+                throw new IllegalArgumentException(
+                    "initialize: " +
+                    PseudoRandomVertexInputFormat.AGGREGATE_VERTICES + " <= 0");
+            }
+            if (inputSplit instanceof BspInputSplit) {
+                bspInputSplit = (BspInputSplit) inputSplit;
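+                // Spread aggregateVertices as evenly as possible: every
+                // split gets aggregateVertices / numSplits vertices, the
+                // first (aggregateVertices % numSplits) splits take one
+                // extra, and startingVertexId offsets this split by the
+                // vertices assigned to the splits before it.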
+                long extraVertices =
+                    aggregateVertices % bspInputSplit.getNumSplits();
+                totalSplitVertices =
+                    aggregateVertices / bspInputSplit.getNumSplits();
+                if (bspInputSplit.getSplitIndex() < extraVertices) {
+                    ++totalSplitVertices;
+                }
+                startingVertexId = (bspInputSplit.getSplitIndex() *
+                    (aggregateVertices / bspInputSplit.getNumSplits())) +
+                    Math.min(bspInputSplit.getSplitIndex(),
+                             extraVertices);
+            } else {
+                throw new IllegalArgumentException(
+                    "initialize: Got " + inputSplit.getClass() +
+                    " instead of " + BspInputSplit.class);
+            }
+            edgesPerVertex =
+                configuration.getLong(
+                    PseudoRandomVertexInputFormat.EDGES_PER_VERTEX, 0);
+            if (edgesPerVertex <= 0) {
+                throw new IllegalArgumentException(
+                    "initialize: " +
+                    PseudoRandomVertexInputFormat.EDGES_PER_VERTEX + " <= 0");
+            }
+        }
+
+        @Override
+        public boolean nextVertex() throws IOException, InterruptedException {
+            return totalSplitVertices > verticesRead;
+        }
+
+        @Override
+        public BasicVertex<LongWritable, DoubleWritable, DoubleWritable, M> getCurrentVertex()
+                throws IOException, InterruptedException {
+            BasicVertex<LongWritable, DoubleWritable, DoubleWritable, M>
+                vertex = BspUtils.createVertex(configuration);
+            long vertexId = startingVertexId + verticesRead;
+            // Seed on the vertex id so the vertex data stays the same across
+            // different numbers of workers, as long as the other parameters
+            // are unchanged.
+            Random rand = new Random(vertexId);
+            DoubleWritable vertexValue = new DoubleWritable(rand.nextDouble());
+            Map<LongWritable, DoubleWritable> edges = Maps.newHashMap();
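+            // Draw edgesPerVertex distinct pseudo-random destinations,
+            // re-drawing on collision so the out-degree is exact.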
+            for (long i = 0; i < edgesPerVertex; ++i) {
+                LongWritable destVertexId = null;
+                do {
+                    // Taking Math.abs of the modulus (rather than of the
+                    // raw long) avoids overflow when nextLong() returns
+                    // Long.MIN_VALUE
+                    destVertexId =
+                        new LongWritable(Math.abs(rand.nextLong() %
+                                         aggregateVertices));
+                } while (edges.containsKey(destVertexId));
+                edges.put(destVertexId, new DoubleWritable(rand.nextDouble()));
+            }
+            vertex.initialize(
+                new LongWritable(vertexId), vertexValue, edges, null);
+            ++verticesRead;
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("next: Return vertexId=" +
+                          vertex.getVertexId().get() +
+                          ", vertexValue=" + vertex.getVertexValue() +
+                          ", edgeMap=" + vertex.iterator());
+            }
+            return vertex;
+        }
+
+        @Override
+        public void close() throws IOException {
+        }
+
+        @Override
+        public float getProgress() throws IOException {
+            return verticesRead * 100.0f / totalSplitVertices;
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/benchmark/RandomMessageBenchmark.java b/src/main/java/org/apache/giraph/benchmark/RandomMessageBenchmark.java
new file mode 100644
index 0000000..ff6c986
--- /dev/null
+++ b/src/main/java/org/apache/giraph/benchmark/RandomMessageBenchmark.java
@@ -0,0 +1,401 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.benchmark;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.giraph.examples.LongSumAggregator;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.WorkerContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Logger;
+import java.util.Iterator;
+import java.util.Random;
+
+/**
+ * Random Message Benchmark for evaluating the messaging performance.
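+ *
+ * A hedged invocation sketch (the jar name is an illustrative placeholder):
+ * <pre>
+ * hadoop jar giraph-with-dependencies.jar \
+ *   org.apache.giraph.benchmark.RandomMessageBenchmark \
+ *   -w 4 -s 3 -V 1000 -e 10 -b 16 -n 1
+ * </pre>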
+ */
+public class RandomMessageBenchmark implements Tool {
+    /** Configuration from Configurable */
+    private Configuration conf;
+
+    /** How many supersteps to run */
+    public static final String SUPERSTEP_COUNT =
+        "RandomMessageBenchmark.superstepCount";
+    /** How many bytes per message */
+    public static final String NUM_BYTES_PER_MESSAGE =
+        "RandomMessageBenchmark.numBytesPerMessage";
+    /** Default bytes per message */
+    public static final int DEFAULT_NUM_BYTES_PER_MESSAGE = 16;
+    /** How many messages per edge */
+    public static final String NUM_MESSAGES_PER_EDGE =
+        "RandomMessageBenchmark.numMessagesPerEdge";
+    /** Default messages per edge */
+    public static final int DEFAULT_NUM_MESSAGES_PER_EDGE = 1;
+    /** All bytes sent during this superstep */
+    public static final String AGG_SUPERSTEP_TOTAL_BYTES =
+        "superstep total bytes sent";
+    /** All bytes sent during this application */
+    public static final String AGG_TOTAL_BYTES = "total bytes sent";
+    /** All messages during this superstep */
+    public static final String AGG_SUPERSTEP_TOTAL_MESSAGES =
+        "superstep total messages";
+    /** All messages during this application */
+    public static final String AGG_TOTAL_MESSAGES = "total messages";
+    /** All millis during this superstep */
+    public static final String AGG_SUPERSTEP_TOTAL_MILLIS =
+        "superstep total millis";
+    /** All millis during this application */
+    public static final String AGG_TOTAL_MILLIS = "total millis";
+    /** Workers for that superstep */
+    public static final String WORKERS = "workers";
+
+    /**
+     * {@link WorkerContext} for RandomMessageBenchmark.
+     */
+    private static class RandomMessageBenchmarkWorkerContext extends
+            WorkerContext {
+        /** Bytes to be sent */
+        private byte[] messageBytes;
+        /** Number of messages sent per edge */
+        private int numMessagesPerEdge = -1;
+        /** Number of supersteps */
+        private int numSupersteps = -1;
+        /** Random generator for random bytes message */
+        private final Random random = new Random(System.currentTimeMillis());
+        /** Start superstep millis */
+        private long startSuperstepMillis = 0;
+        /** Total bytes */
+        private long totalBytes = 0;
+        /** Total messages */
+        private long totalMessages = 0;
+        /** Total millis */
+        private long totalMillis = 0;
+        /** Class logger */
+        private static final Logger LOG =
+            Logger.getLogger(RandomMessageBenchmarkWorkerContext.class);
+
+        @Override
+        public void preApplication()
+                throws InstantiationException, IllegalAccessException {
+            messageBytes =
+                new byte[getContext().getConfiguration().
+                         getInt(NUM_BYTES_PER_MESSAGE,
+                               DEFAULT_NUM_BYTES_PER_MESSAGE)];
+            numMessagesPerEdge =
+                getContext().getConfiguration().
+                    getInt(NUM_MESSAGES_PER_EDGE,
+                           DEFAULT_NUM_MESSAGES_PER_EDGE);
+            numSupersteps = getContext().getConfiguration().
+                                getInt(SUPERSTEP_COUNT, -1);
+            registerAggregator(AGG_SUPERSTEP_TOTAL_BYTES,
+                LongSumAggregator.class);
+            registerAggregator(AGG_SUPERSTEP_TOTAL_MESSAGES,
+                LongSumAggregator.class);
+            registerAggregator(AGG_SUPERSTEP_TOTAL_MILLIS,
+                LongSumAggregator.class);
+            registerAggregator(WORKERS,
+                LongSumAggregator.class);
+        }
+
+        @Override
+        public void preSuperstep() {
+            LongSumAggregator superstepBytesAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_BYTES);
+            LongSumAggregator superstepMessagesAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_MESSAGES);
+            LongSumAggregator superstepMillisAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_MILLIS);
+            LongSumAggregator workersAggregator =
+                (LongSumAggregator) getAggregator(WORKERS);
+
+            // For timing and tracking the supersteps
+            // - superstep 0 starts the time, but cannot display any stats
+            //   since nothing has been aggregated yet
+            // - supersteps > 0 can display the stats
+            if (getSuperstep() == 0) {
+                startSuperstepMillis = System.currentTimeMillis();
+            } else {
+                totalBytes +=
+                        superstepBytesAggregator.getAggregatedValue().get();
+                totalMessages +=
+                        superstepMessagesAggregator.getAggregatedValue().get();
+                totalMillis +=
+                        superstepMillisAggregator.getAggregatedValue().get();
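+                // The millis aggregator sums every worker's superstep time,
+                // so the aggregated bytes/messages are scaled by the worker
+                // count before dividing by the summed millis to recover an
+                // aggregate cluster-wide rate (x1000 converts millis to
+                // seconds; dividing by 1024 twice converts bytes to MB).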
+                double superstepMegabytesPerSecond =
+                        superstepBytesAggregator.getAggregatedValue().get() *
+                        workersAggregator.getAggregatedValue().get() *
+                        1000d / 1024d / 1024d /
+                        superstepMillisAggregator.getAggregatedValue().get();
+                double megabytesPerSecond = totalBytes *
+                        workersAggregator.getAggregatedValue().get() *
+                        1000d / 1024d / 1024d / totalMillis;
+                double superstepMessagesPerSecond =
+                        superstepMessagesAggregator.getAggregatedValue().get() *
+                        workersAggregator.getAggregatedValue().get() * 1000d /
+                        superstepMillisAggregator.getAggregatedValue().get();
+                double messagesPerSecond = totalMessages *
+                        workersAggregator.getAggregatedValue().get() * 1000d /
+                        totalMillis;
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("Outputing statistics for superstep " +
+                             getSuperstep());
+                    LOG.info(AGG_SUPERSTEP_TOTAL_BYTES + " : " +
+                             superstepBytesAggregator.getAggregatedValue());
+                    LOG.info(AGG_TOTAL_BYTES + " : " + totalBytes);
+                    LOG.info(AGG_SUPERSTEP_TOTAL_MESSAGES + " : " +
+                             superstepMessagesAggregator.getAggregatedValue());
+                    LOG.info(AGG_TOTAL_MESSAGES + " : " + totalMessages);
+                    LOG.info(AGG_SUPERSTEP_TOTAL_MILLIS + " : " +
+                             superstepMillisAggregator.getAggregatedValue());
+                    LOG.info(AGG_TOTAL_MILLIS + " : " + totalMillis);
+                    LOG.info(WORKERS + " : " +
+                             workersAggregator.getAggregatedValue());
+                    LOG.info("Superstep megabytes / second = " +
+                             superstepMegabytesPerSecond);
+                    LOG.info("Total megabytes / second = " +
+                             megabytesPerSecond);
+                    LOG.info("Superstep messages / second = " +
+                             superstepMessagesPerSecond);
+                    LOG.info("Total messages / second = " +
+                             messagesPerSecond);
+                    LOG.info("Superstep megabytes / second / worker = " +
+                             superstepMegabytesPerSecond /
+                             workersAggregator.getAggregatedValue().get());
+                    LOG.info("Total megabytes / second / worker = " +
+                             megabytesPerSecond /
+                             workersAggregator.getAggregatedValue().get());
+                    LOG.info("Superstep messages / second / worker = " +
+                             superstepMessagesPerSecond /
+                             workersAggregator.getAggregatedValue().get());
+                    LOG.info("Total messages / second / worker = " +
+                             messagesPerSecond /
+                             workersAggregator.getAggregatedValue().get());
+                }
+            }
+
+            superstepBytesAggregator.setAggregatedValue(
+                new LongWritable(0L));
+            superstepMessagesAggregator.setAggregatedValue(
+                new LongWritable(0L));
+            workersAggregator.setAggregatedValue(
+                new LongWritable(1L));
+            useAggregator(AGG_SUPERSTEP_TOTAL_BYTES);
+            useAggregator(AGG_SUPERSTEP_TOTAL_MILLIS);
+            useAggregator(AGG_SUPERSTEP_TOTAL_MESSAGES);
+            useAggregator(WORKERS);
+        }
+
+        @Override
+        public void postSuperstep() {
+            LongSumAggregator superstepMillisAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_MILLIS);
+            long endSuperstepMillis = System.currentTimeMillis();
+            long superstepMillis = endSuperstepMillis - startSuperstepMillis;
+            startSuperstepMillis = endSuperstepMillis;
+            superstepMillisAggregator.setAggregatedValue(
+                new LongWritable(superstepMillis));
+        }
+
+        @Override
+        public void postApplication() {}
+
+        public byte[] getMessageBytes() {
+            return messageBytes;
+        }
+
+        public int getNumMessagesPerEdge() {
+            return numMessagesPerEdge;
+        }
+
+        public int getNumSupersteps() {
+            return numSupersteps;
+        }
+
+        public void randomizeMessageBytes() {
+            random.nextBytes(messageBytes);
+        }
+    }
+
+    /**
+     * Actual message computation (messaging in this case)
+     */
+    public static class RandomMessageVertex extends EdgeListVertex<
+            LongWritable, DoubleWritable, DoubleWritable, BytesWritable> {
+
+        @Override
+        public void compute(Iterator<BytesWritable> msgIterator) {
+            RandomMessageBenchmarkWorkerContext workerContext =
+                (RandomMessageBenchmarkWorkerContext) getWorkerContext();
+            LongSumAggregator superstepBytesAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_BYTES);
+            LongSumAggregator superstepMessagesAggregator =
+                (LongSumAggregator) getAggregator(AGG_SUPERSTEP_TOTAL_MESSAGES);
+            if (getSuperstep() < workerContext.getNumSupersteps()) {
+                for (int i = 0; i < workerContext.getNumMessagesPerEdge();
+                        i++) {
+                    workerContext.randomizeMessageBytes();
+                    sendMsgToAllEdges(
+                        new BytesWritable(workerContext.getMessageBytes()));
+                    long bytesSent = workerContext.getMessageBytes().length *
+                        getNumOutEdges();
+                    superstepBytesAggregator.aggregate(bytesSent);
+                    superstepMessagesAggregator.aggregate(getNumOutEdges());
+                }
+            } else {
+                voteToHalt();
+            }
+        }
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+        Options options = new Options();
+        options.addOption("h", "help", false, "Help");
+        options.addOption("v", "verbose", false, "Verbose");
+        options.addOption("w",
+                "workers",
+                true,
+                "Number of workers");
+        options.addOption("b",
+                "bytes",
+                true,
+                "Message bytes per memssage");
+        options.addOption("n",
+                "number",
+                true,
+                "Number of messages per edge");
+        options.addOption("s",
+                "supersteps",
+                true,
+                "Supersteps to execute before finishing");
+        options.addOption("V",
+                "aggregateVertices",
+                true,
+                "Aggregate vertices");
+        options.addOption("e",
+                "edgesPerVertex",
+                true,
+                "Edges per vertex");
+        options.addOption("f",
+                "flusher",
+                true,
+                "Number of flush threads");
+
+        HelpFormatter formatter = new HelpFormatter();
+        if (args.length == 0) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmd = parser.parse(options, args);
+        if (cmd.hasOption('h')) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        if (!cmd.hasOption('w')) {
+            System.out.println("Need to choose the number of workers (-w)");
+            return -1;
+        }
+        if (!cmd.hasOption('s')) {
+            System.out.println("Need to set the number of supersteps (-s)");
+            return -1;
+        }
+        if (!cmd.hasOption('V')) {
+            System.out.println("Need to set the aggregate vertices (-V)");
+            return -1;
+        }
+        if (!cmd.hasOption('e')) {
+            System.out.println("Need to set the number of edges " +
+                               "per vertex (-e)");
+            return -1;
+        }
+        if (!cmd.hasOption('b')) {
+            System.out.println("Need to set the number of message bytes (-b)");
+            return -1;
+        }
+        if (!cmd.hasOption('n')) {
+            System.out.println("Need to set the number of messages per edge (-n)");
+            return -1;
+        }
+        int workers = Integer.parseInt(cmd.getOptionValue('w'));
+        GiraphJob job = new GiraphJob(getConf(), getClass().getName());
+        job.getConfiguration().setInt(GiraphJob.CHECKPOINT_FREQUENCY, 0);
+        job.setVertexClass(RandomMessageVertex.class);
+        job.setVertexInputFormatClass(PseudoRandomVertexInputFormat.class);
+        job.setWorkerContextClass(RandomMessageBenchmarkWorkerContext.class);
+        job.setWorkerConfiguration(workers, workers, 100.0f);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.AGGREGATE_VERTICES,
+            Long.parseLong(cmd.getOptionValue('V')));
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.EDGES_PER_VERTEX,
+            Long.parseLong(cmd.getOptionValue('e')));
+        job.getConfiguration().setInt(
+            SUPERSTEP_COUNT,
+            Integer.parseInt(cmd.getOptionValue('s')));
+        job.getConfiguration().setInt(
+            RandomMessageBenchmark.NUM_BYTES_PER_MESSAGE,
+            Integer.parseInt(cmd.getOptionValue('b')));
+        job.getConfiguration().setInt(
+            RandomMessageBenchmark.NUM_MESSAGES_PER_EDGE,
+            Integer.parseInt(cmd.getOptionValue('n')));
+
+        boolean isVerbose = cmd.hasOption('v');
+        if (cmd.hasOption('f')) {
+            job.getConfiguration().setInt(GiraphJob.MSG_NUM_FLUSH_THREADS,
+                Integer.parseInt(cmd.getOptionValue('f')));
+        }
+        return job.run(isVerbose) ? 0 : -1;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.exit(ToolRunner.run(new RandomMessageBenchmark(), args));
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/ApplicationState.java b/src/main/java/org/apache/giraph/bsp/ApplicationState.java
new file mode 100644
index 0000000..b9ec287
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/ApplicationState.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+/**
+ *  State of the BSP application
+ */
+public enum ApplicationState {
+    /** Shouldn't be seen, just an initial state */
+    UNKNOWN,
+    /** Start from a desired superstep */
+    START_SUPERSTEP,
+    /** Unrecoverable */
+    FAILED,
+    /** Successful completion */
+    FINISHED
+}
diff --git a/src/main/java/org/apache/giraph/bsp/BspInputFormat.java b/src/main/java/org/apache/giraph/bsp/BspInputFormat.java
new file mode 100644
index 0000000..6e80ad9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/BspInputFormat.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.log4j.Logger;
+
+/**
+ * This InputFormat supports the BSP model by ensuring that the user specifies
+ * how many splits (number of mappers) should be started simultaneously.
+ * The number of splits depends on whether the master and worker processes are
+ * separate.  It is not meant to do any meaningful split of user-data.
+ */
+public class BspInputFormat extends InputFormat<Text, Text> {
+    /** Logger */
+    private static final Logger LOG = Logger.getLogger(BspInputFormat.class);
+
+    /**
+     * Get the correct number of mappers based on the configuration
+     *
+     * @param conf Configuration to determine the number of mappers
+     * @return Maximum number of map tasks (workers, plus the ZooKeeper
+     *         server tasks when the master and workers are split)
+     */
+    public static int getMaxTasks(Configuration conf) {
+        int maxWorkers = conf.getInt(GiraphJob.MAX_WORKERS, 0);
+        boolean splitMasterWorker =
+            conf.getBoolean(GiraphJob.SPLIT_MASTER_WORKER,
+                            GiraphJob.SPLIT_MASTER_WORKER_DEFAULT);
+        int maxTasks = maxWorkers;
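+        // When the master runs separately from the workers, the ZooKeeper
+        // server tasks need their own map slots, so reserve them as well.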
+        if (splitMasterWorker) {
+            int zkServers =
+                conf.getInt(GiraphJob.ZOOKEEPER_SERVER_COUNT,
+                            GiraphJob.ZOOKEEPER_SERVER_COUNT_DEFAULT);
+            maxTasks += zkServers;
+        }
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("getMaxTasks: Max workers = " + maxWorkers +
+                      ", split master/worker = " + splitMasterWorker +
+                      ", total max tasks = " + maxTasks);
+        }
+        return maxTasks;
+    }
+
+    @Override
+    public List<InputSplit> getSplits(JobContext context)
+        throws IOException, InterruptedException {
+        Configuration conf = context.getConfiguration();
+        int maxTasks = getMaxTasks(conf);
+        if (maxTasks <= 0) {
+            throw new IllegalStateException(
+                "getSplits: Cannot have maxTasks <= 0 - " + maxTasks);
+        }
+        List<InputSplit> inputSplitList = new ArrayList<InputSplit>();
+        for (int i = 0; i < maxTasks; ++i) {
+            inputSplitList.add(new BspInputSplit());
+        }
+        return inputSplitList;
+    }
+
+    @Override
+    public RecordReader<Text, Text>
+        createRecordReader(InputSplit split, TaskAttemptContext context)
+        throws IOException, InterruptedException {
+        return new BspRecordReader();
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/BspInputSplit.java b/src/main/java/org/apache/giraph/bsp/BspInputSplit.java
new file mode 100644
index 0000000..916090e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/BspInputSplit.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * This InputSplit will not give any ordering or location data.
+ * It is used internally by BspInputFormat (which determines
+ * how many tasks to run the application on).  Users should not use this
+ * directly.
+ */
+public class BspInputSplit extends InputSplit implements Writable {
+    /** Number of splits */
+    private int numSplits = -1;
+    /** Split index */
+    private int splitIndex = -1;
+
+    public BspInputSplit() {}
+
+    public BspInputSplit(int splitIndex, int numSplits) {
+        this.splitIndex = splitIndex;
+        this.numSplits = numSplits;
+    }
+
+    @Override
+    public long getLength() throws IOException, InterruptedException {
+        return 0;
+    }
+
+    @Override
+    public String[] getLocations() throws IOException, InterruptedException {
+        return new String[]{};
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+        splitIndex = in.readInt();
+        numSplits = in.readInt();
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+        out.writeInt(splitIndex);
+        out.writeInt(numSplits);
+    }
+
+    public int getSplitIndex() {
+        return splitIndex;
+    }
+
+    public int getNumSplits() {
+        return numSplits;
+    }
+
+    @Override
+    public String toString() {
+        return "'" + getClass().getCanonicalName() +
+            ", index=" + getSplitIndex() + ", num=" + getNumSplits();
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/BspOutputFormat.java b/src/main/java/org/apache/giraph/bsp/BspOutputFormat.java
new file mode 100644
index 0000000..df07373
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/BspOutputFormat.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.log4j.Logger;
+
+/**
+ * This is for internal use only.  Allows the vertex output format routines
+ * to be called as if this were a normal Hadoop job.
+ */
+public class BspOutputFormat extends OutputFormat<Text, Text> {
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(BspOutputFormat.class);
+
+    @Override
+    public void checkOutputSpecs(JobContext context)
+            throws IOException, InterruptedException {
+        if (BspUtils.getVertexOutputFormatClass(context.getConfiguration())
+                == null) {
+            LOG.warn("checkOutputSpecs: ImmutableOutputCommiter" +
+                     " will not check anything");
+            return;
+        }
+        BspUtils.createVertexOutputFormat(context.getConfiguration()).
+            checkOutputSpecs(context);
+    }
+
+    @Override
+    public OutputCommitter getOutputCommitter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        if (BspUtils.getVertexOutputFormatClass(context.getConfiguration())
+                == null) {
+            LOG.warn("getOutputCommitter: Returning " +
+                     "ImmutableOutputCommiter (does nothing).");
+            return new ImmutableOutputCommitter();
+        }
+        return BspUtils.createVertexOutputFormat(context.getConfiguration()).
+            getOutputCommitter(context);
+    }
+
+    @Override
+    public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        return new BspRecordWriter();
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/BspRecordReader.java b/src/main/java/org/apache/giraph/bsp/BspRecordReader.java
new file mode 100644
index 0000000..7f8811c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/BspRecordReader.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.io.Text;
+
+/**
+ * Only returns a single key-value pair so that the map() can run.
+ */
+class BspRecordReader extends RecordReader<Text, Text> {
+
+    private static final Text ONLY_KEY = new Text("only key");
+    private static final Text ONLY_VALUE = new Text("only value");
+
+    /** Has the one record been seen? */
+    private boolean seenRecord = false;
+
+    @Override
+    public void close() throws IOException {
+        // Nothing to close
+    }
+
+    @Override
+    public float getProgress() throws IOException {
+        return (seenRecord ? 1f : 0f);
+    }
+
+    @Override
+    public Text getCurrentKey() throws IOException, InterruptedException {
+        return ONLY_KEY;
+    }
+
+    @Override
+    public Text getCurrentValue() throws IOException, InterruptedException {
+        return ONLY_VALUE;
+    }
+
+    @Override
+    public void initialize(InputSplit inputSplit, TaskAttemptContext context)
+        throws IOException, InterruptedException {
+    }
+
+    @Override
+    public boolean nextKeyValue() throws IOException, InterruptedException {
+        if (seenRecord) {
+            return false;
+        }
+        seenRecord = true;
+        return true;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/BspRecordWriter.java b/src/main/java/org/apache/giraph/bsp/BspRecordWriter.java
new file mode 100644
index 0000000..3838c1a
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/BspRecordWriter.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Used by {@link BspOutputFormat} since some versions of Hadoop
+ * require that a RecordWriter is returned from getRecordWriter.
+ * Does nothing, except ensures that write is never called.
+ */
+public class BspRecordWriter extends RecordWriter<Text, Text> {
+
+    @Override
+    public void close(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        // Do nothing
+    }
+
+    @Override
+    public void write(Text key, Text value)
+            throws IOException, InterruptedException {
+        throw new IOException("write: Cannot write with " +
+                              getClass().getName() +
+                              ".  Should never be called");
+    }
+}
diff --git a/src/main/java/org/apache/giraph/bsp/CentralizedService.java b/src/main/java/org/apache/giraph/bsp/CentralizedService.java
new file mode 100644
index 0000000..a72f142
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/CentralizedService.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.IOException;
+
+/**
+ * Basic service interface shared by both {@link CentralizedServiceMaster} and
+ * {@link CentralizedServiceWorker}.
+ */
+@SuppressWarnings("rawtypes")
+public interface CentralizedService<I extends WritableComparable,
+                                    V extends Writable,
+                                    E extends Writable,
+                                    M extends Writable> {
+    /**
+     * Setup (must be called prior to any other function)
+     */
+    void setup();
+
+    /**
+     * Get the current global superstep of the application to work on.
+     *
+     * @return global superstep (begins at INPUT_SUPERSTEP)
+     */
+    long getSuperstep();
+
+    /**
+     * Get the restarted superstep
+     *
+     * @return -1 if not manually restarted, otherwise the superstep id
+     */
+    long getRestartedSuperstep();
+
+    /**
+     * Given a superstep, should it be checkpointed based on the
+     * checkpoint frequency?
+     *
+     * @param superstep superstep to check against frequency
+     * @return true if checkpoint frequency met or superstep is 1.
+     */
+    boolean checkpointFrequencyMet(long superstep);
+
+    /**
+     * Clean up the service (no calls may be issued after this)
+     *
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    void cleanup() throws IOException, InterruptedException;
+}
diff --git a/src/main/java/org/apache/giraph/bsp/CentralizedServiceMaster.java b/src/main/java/org/apache/giraph/bsp/CentralizedServiceMaster.java
new file mode 100644
index 0000000..9e44c1b
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/CentralizedServiceMaster.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * At most, there will be one active master at a time, but many threads can
+ * be trying to be the active master.
+ */
+@SuppressWarnings("rawtypes")
+public interface CentralizedServiceMaster<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends CentralizedService<I, V, E, M> {
+    /**
+     * Become the master.
+     * @return true if became the master, false if the application is done.
+     */
+    boolean becomeMaster();
+
+    /**
+     * Create the {@link InputSplit} objects from the index range based on the
+     * user-defined VertexInputFormat.  The {@link InputSplit} objects will be
+     * processed by the workers later on during the INPUT_SUPERSTEP.
+     *
+     * @return Number of partitions. Returns -1 on failure to create
+     *         valid input splits.
+     */
+    int createInputSplits();
+
+    /**
+     * Master coordinates the superstep
+     *
+     * @return State of the application as a result of this superstep
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    SuperstepState coordinateSuperstep()
+        throws KeeperException, InterruptedException;
+
+    /**
+     * Master can decide to restart from the last good checkpoint if a
+     * worker fails during a superstep.
+     *
+     * @param checkpoint Checkpoint to restart from
+     */
+    void restartFromCheckpoint(long checkpoint);
+
+    /**
+     * Get the last known good checkpoint
+     *
+     * @return Superstep of the last good checkpoint
+     * @throws IOException
+     */
+    long getLastGoodCheckpoint() throws IOException;
+
+    /**
+     * If the master decides that this job doesn't have the resources to
+     * continue, it can fail the job.  It can also designate what to do next.
+     * Typically this is mainly informative.
+     *
+     * @param state Application state to set
+     * @param applicationAttempt attempt to start on
+     * @param desiredSuperstep Superstep to restart from (if applicable)
+     */
+    void setJobState(ApplicationState state,
+                     long applicationAttempt,
+                     long desiredSuperstep);
+}
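
Read together with the SuperstepState enum added later in this patch, the
interface implies a master driver loop along the following lines.  This is
an illustrative sketch only, not the patch's actual control flow, and the
ApplicationState.FAILED value is assumed:

    if (serviceMaster.becomeMaster()) {
        if (serviceMaster.createInputSplits() == -1) {
            // Could not create valid input splits: fail the job.
            serviceMaster.setJobState(ApplicationState.FAILED, -1, -1);
        } else {
            SuperstepState state = SuperstepState.INITIAL;
            while (state != SuperstepState.ALL_SUPERSTEPS_DONE) {
                state = serviceMaster.coordinateSuperstep();
                if (state == SuperstepState.WORKER_FAILURE) {
                    // Roll back to the last good checkpoint and retry.
                    serviceMaster.restartFromCheckpoint(
                        serviceMaster.getLastGoodCheckpoint());
                }
            }
        }
    }
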
diff --git a/src/main/java/org/apache/giraph/bsp/CentralizedServiceWorker.java b/src/main/java/org/apache/giraph/bsp/CentralizedServiceWorker.java
new file mode 100644
index 0000000..29068a6
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/CentralizedServiceWorker.java
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import org.apache.giraph.graph.AggregatorUsage;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.GraphMapper;
+import org.apache.giraph.graph.partition.Partition;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.giraph.graph.WorkerContext;
+
+/**
+ * All workers should have access to this centralized service to
+ * execute the following methods.
+ */
+@SuppressWarnings("rawtypes")
+public interface CentralizedServiceWorker<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends CentralizedService<I, V, E, M>, AggregatorUsage {
+    /**
+     * Get the worker information
+     *
+     * @return Worker information
+     */
+    WorkerInfo getWorkerInfo();
+
+    /**
+     * Get the worker context.
+     *
+     * @return worker's WorkerContext
+     */
+    WorkerContext getWorkerContext();
+
+    /**
+     * Get a map of the partition id to the partition for this worker.
+     * The partitions contain the vertices for
+     * this worker and can be used to run compute() for the vertices or do
+     * checkpointing.
+     *
+     * @return Map of partition ids to partitions that this worker owns.
+     */
+    Map<Integer, Partition<I, V, E, M>> getPartitionMap();
+
+    /**
+     * Get a collection of all the partition owners.
+     *
+     * @return Collection of all the partition owners.
+     */
+    Collection<? extends PartitionOwner> getPartitionOwners();
+
+    /**
+     * Both the vertices and the messages need to be checkpointed in order
+     * for the checkpoint to be usable.  This is done after all messages
+     * have been delivered, but prior to a superstep starting.
+     *
+     * @throws IOException
+     */
+    void storeCheckpoint() throws IOException;
+
+    /**
+     * Load the vertices, edges, messages from the beginning of a superstep.
+     * Will load the vertex partitions as designated by the master and set the
+     * appropriate superstep.
+     *
+     * @param superstep which checkpoint to use
+     * @throws IOException
+     */
+    void loadCheckpoint(long superstep) throws IOException;
+
+    /**
+     * Take all steps prior to actually beginning the computation of a
+     * superstep.
+     *
+     * @return Collection of all the partition owners from the master for this
+     *         superstep.
+     */
+    Collection<? extends PartitionOwner> startSuperstep();
+
+    /**
+     * Worker is done with its portion of the superstep.  Report the
+     * worker level statistics after the computation.
+     *
+     * @param partitionStatsList All the partition stats for this worker
+     * @return true if this is the last superstep, false otherwise
+     */
+    boolean finishSuperstep(List<PartitionStats> partitionStatsList);
+
+    /**
+     * Get the partition that a vertex index would belong to
+     *
+     * @param vertexIndex Index of the vertex that is used to find the correct
+     *        partition.
+     * @return Correct partition if exists on this worker, null otherwise.
+     */
+    Partition<I, V, E, M> getPartition(I vertexIndex);
+
+    /**
+     * Every client will need to get a partition owner from a vertex id so that
+     * they know which worker to send the request to.
+     *
+     * @param superstep Superstep to look for
+     * @param vertexIndex Vertex index to look for
+     * @return PartitionOwner that should contain this vertex if it exists
+     */
+    PartitionOwner getVertexPartitionOwner(I vertexIndex);
+
+    /**
+     * Look up a vertex on a worker given its vertex index.
+     *
+     * @param vertexIndex Vertex index to look for
+     * @return Vertex if it exists on this worker.
+     */
+    BasicVertex<I, V, E, M> getVertex(I vertexIndex);
+
+    /**
+     * If desired by the user, vertex partitions are redistributed among
+     * workers according to the chosen {@link GraphPartitioner}.
+     *
+     * @param masterSetPartitionOwners Partition owner info passed from the
+     *        master.
+     */
+    void exchangeVertexPartitions(
+        Collection<? extends PartitionOwner> masterSetPartitionOwners);
+
+    /**
+     * Assign messages to a vertex (bypasses package-private access to
+     * setMessages() for internal classes).
+     *
+     * @param vertex Vertex (owned by worker)
+     * @param messageIterator Messages to assign to the vertex
+     */
+    void assignMessagesToVertex(BasicVertex<I, V, E, M> vertex,
+                                Iterable<M> messageIterator);
+
+    /**
+     * Get the GraphMapper that this service is using.  Vertices need to know
+     * this.
+     *
+     * @return GraphMapper for this service
+     */
+    GraphMapper<I, V, E, M> getGraphMapper();
+
+    /**
+     * Operations that will be called if there is a failure by a worker.
+     */
+    void failureCleanup();
+}
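
As a rough sketch of how a caller might drive one superstep through this
interface (illustrative only; the compute loop and the stats bookkeeping
are elided):

    Collection<? extends PartitionOwner> owners =
        serviceWorker.startSuperstep();
    serviceWorker.exchangeVertexPartitions(owners);
    List<PartitionStats> statsList = new ArrayList<PartitionStats>();
    for (Partition<I, V, E, M> partition :
            serviceWorker.getPartitionMap().values()) {
        // ... run compute() over partition.getVertices() and add a
        // PartitionStats entry for this partition to statsList ...
    }
    boolean lastSuperstep = serviceWorker.finishSuperstep(statsList);
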
diff --git a/src/main/java/org/apache/giraph/bsp/ImmutableOutputCommitter.java b/src/main/java/org/apache/giraph/bsp/ImmutableOutputCommitter.java
new file mode 100644
index 0000000..5a85bbd
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/ImmutableOutputCommitter.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+import java.io.IOException;
+
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * This output committer doesn't do anything, meant for the case
+ * where output isn't desired, or as a base for not using
+ * FileOutputCommitter.
+ */
+public class ImmutableOutputCommitter extends OutputCommitter {
+    @Override
+    public void abortTask(TaskAttemptContext context) throws IOException {
+    }
+
+    @Override
+    public void commitTask(TaskAttemptContext context) throws IOException {
+    }
+
+    @Override
+    public boolean needsTaskCommit(TaskAttemptContext context)
+            throws IOException {
+        return false;
+    }
+
+    @Override
+    public void setupJob(JobContext context) throws IOException {
+    }
+
+    @Override
+    public void setupTask(TaskAttemptContext context) throws IOException {
+    }
+
+    /*if[HADOOP_NON_SECURE]
+    @Override
+    public void cleanupJob(JobContext jobContext)  throws IOException {
+    }
+    else[HADOOP_NON_SECURE]*/
+    @Override
+    /*end[HADOOP_NON_SECURE]*/
+    public void commitJob(JobContext jobContext) throws IOException {
+    }
+}
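
A typical use, sketched here with assumed class names and with the Hadoop
imports omitted, is an OutputFormat that opts out of committing entirely:

    public class NoOpOutputFormat extends OutputFormat<Text, Text> {
        @Override
        public void checkOutputSpecs(JobContext context) { }

        @Override
        public RecordWriter<Text, Text> getRecordWriter(
                TaskAttemptContext context) {
            return new RecordWriter<Text, Text>() {
                @Override public void write(Text key, Text value) { }
                @Override public void close(TaskAttemptContext context) { }
            };
        }

        @Override
        public OutputCommitter getOutputCommitter(TaskAttemptContext context) {
            // Nothing to move into place or clean up, so skip
            // FileOutputCommitter's filesystem work altogether.
            return new ImmutableOutputCommitter();
        }
    }
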
diff --git a/src/main/java/org/apache/giraph/bsp/SuperstepState.java b/src/main/java/org/apache/giraph/bsp/SuperstepState.java
new file mode 100644
index 0000000..d61f1af
--- /dev/null
+++ b/src/main/java/org/apache/giraph/bsp/SuperstepState.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.bsp;
+
+/**
+ * State of a coordinated superstep
+ */
+public enum SuperstepState {
+    /** Nothing happened yet */
+    INITIAL,
+    /** A worker died during this superstep */
+    WORKER_FAILURE,
+    /** This superstep completed correctly */
+    THIS_SUPERSTEP_DONE,
+    /** All supersteps are complete */
+    ALL_SUPERSTEPS_DONE,
+}
diff --git a/src/main/java/org/apache/giraph/comm/ArrayListWritable.java b/src/main/java/org/apache/giraph/comm/ArrayListWritable.java
new file mode 100644
index 0000000..034f39f
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/ArrayListWritable.java
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * A Writable for an ArrayList containing instances of a class.
+ */
+public abstract class ArrayListWritable<M extends Writable> extends ArrayList<M>
+          implements Writable, Configurable {
+    /** Used for instantiation */
+    private Class<M> refClass = null;
+    /** Defining a layout version for a serializable class. */
+    private static final long serialVersionUID = 1L;
+    /** Configuration */
+    private Configuration conf;
+
+    /**
+     * Using the default constructor requires that the user implement
+     * setClass(), which is guaranteed to be invoked in readFields() before
+     * any elements are instantiated.
+     */
+    public ArrayListWritable() {
+    }
+
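+    /**
+     * Copy constructor.
+     *
+     * @param arrayListWritable ArrayListWritable to copy the elements from
+     */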
+    public ArrayListWritable(ArrayListWritable<M> arrayListWritable) {
+        super(arrayListWritable);
+    }
+
+    /**
+     * This constructor allows setting the refClass during construction.
+     *
+     * @param refClass internal type class
+     */
+    public ArrayListWritable(Class<M> refClass) {
+        super();
+        this.refClass = refClass;
+    }
+
+    /**
+     * This is a one-time operation to set the class type
+     *
+     * @param refClass internal type class
+     */
+    public void setClass(Class<M> refClass) {
+        if (this.refClass != null) {
+            throw new RuntimeException(
+                "setClass: refClass is already set to " +
+                this.refClass.getName());
+        }
+        this.refClass = refClass;
+    }
+
+    /**
+     * Subclasses must set the class type appropriately and can use
+     * setClass(Class<M> refClass) to do it.
+     */
+    public abstract void setClass();
+
+    public void readFields(DataInput in) throws IOException {
+        if (this.refClass == null) {
+            setClass();
+        }
+        int numValues = in.readInt();            // read number of values
+        ensureCapacity(numValues);
+        for (int i = 0; i < numValues; i++) {
+            M value = ReflectionUtils.newInstance(refClass, conf);
+            value.readFields(in);                // read a value
+            add(value);                          // store it in values
+        }
+    }
+
+    public void write(DataOutput out) throws IOException {
+        int numValues = size();
+        out.writeInt(numValues);                 // write number of values
+        for (int i = 0; i < numValues; i++) {
+            get(i).write(out);
+        }
+    }
+
+    public final Configuration getConf() {
+        return conf;
+    }
+
+    public final void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
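
Concrete subclasses only need to pin down the element class so that
readFields() can instantiate elements reflectively; a minimal hypothetical
example (the MsgList and VertexList classes used later in this patch follow
the same pattern):

    public class DoubleWritableList
            extends ArrayListWritable<DoubleWritable> {
        private static final long serialVersionUID = 1L;

        @Override
        public void setClass() {
            setClass(DoubleWritable.class);
        }
    }
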
diff --git a/src/main/java/org/apache/giraph/comm/BasicRPCCommunications.java b/src/main/java/org/apache/giraph/comm/BasicRPCCommunications.java
new file mode 100644
index 0000000..fc9d140
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/BasicRPCCommunications.java
@@ -0,0 +1,1224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import org.apache.giraph.bsp.CentralizedServiceWorker;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.MutableVertex;
+import org.apache.giraph.graph.VertexCombiner;
+import org.apache.giraph.graph.VertexMutations;
+import org.apache.giraph.graph.VertexResolver;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+import java.net.BindException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.giraph.graph.partition.Partition;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.utils.MemoryUtils;
+
+import com.google.common.collect.Iterables;
+
+/*if[HADOOP_FACEBOOK]
+import org.apache.hadoop.ipc.ProtocolSignature;
+end[HADOOP_FACEBOOK]*/
+
+@SuppressWarnings("rawtypes")
+public abstract class BasicRPCCommunications<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable, J>
+        implements CommunicationsInterface<I, V, E, M>,
+        ServerInterface<I, V, E, M> {
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(BasicRPCCommunications.class);
+    /** Indicates whether in superstep preparation */
+    private boolean inPrepareSuperstep = false;
+    /** Local hostname */
+    private final String localHostname;
+    /** Name of RPC server, == myAddress.toString() */
+    private final String myName;
+    /** RPC server */
+    private Server server;
+    /** Centralized service, needed to get vertex ranges */
+    private final CentralizedServiceWorker<I, V, E, M> service;
+    /** Hadoop configuration */
+    protected final Configuration conf;
+    /** Combiner instance, can be null */
+    private final VertexCombiner<I, M> combiner;
+    /** Address of RPC server */
+    private InetSocketAddress myAddress;
+    /** Messages sent during the last superstep */
+    private long totalMsgsSentInSuperstep = 0;
+    /** Maximum messages sent per putVertexIdMessagesList RPC */
+    private final int maxMessagesPerFlushPut;
+    /**
+     * Map of the peer connections, mapping from remote socket address to client
+     * meta data
+     */
+    private final Map<InetSocketAddress, PeerConnection> peerConnections =
+        new HashMap<InetSocketAddress, PeerConnection>();
+    /**
+     * Cached map of partition ids to remote socket address.  Needs to be
+     * synchronized.
+     */
+    private final Map<Integer, InetSocketAddress> partitionIndexAddressMap =
+        new HashMap<Integer, InetSocketAddress>();
+    /**
+     * Thread pool for message flush threads
+     */
+    private final ExecutorService executor;
+    /**
+     * Map of outbound messages, mapping from remote server to
+     * destination vertex index to list of messages
+     * (Synchronized between peer threads and main thread for each internal
+     *  map)
+     */
+    private final Map<InetSocketAddress, Map<I, MsgList<M>>> outMessages =
+        new HashMap<InetSocketAddress, Map<I, MsgList<M>>>();
+    /**
+     * Map of incoming messages, mapping from vertex index to list of messages.
+     * Only accessed by the main thread (no need to synchronize).
+     */
+    private final Map<I, List<M>> inMessages = new HashMap<I, List<M>>();
+    /**
+     * Map of inbound messages, mapping from vertex index to list of messages.
+     * Transferred to inMessages at beginning of a superstep.  This
+     * intermediary step exists so that the combiner will run not only at the
+     * client, but also at the server. Also, allows the sending of large
+     * message lists during the superstep computation. (Synchronized)
+     */
+    private final Map<I, List<M>> transientInMessages =
+        new HashMap<I, List<M>>();
+    /**
+     * Map of partition ids to incoming vertices from other workers.
+     * (Synchronized)
+     */
+    private final Map<Integer, List<BasicVertex<I, V, E, M>>>
+        inPartitionVertexMap =
+            new HashMap<Integer, List<BasicVertex<I, V, E, M>>>();
+
+    /**
+     * Map from vertex index to all vertex mutations
+     */
+    private final Map<I, VertexMutations<I, V, E, M>>
+        inVertexMutationsMap =
+            new HashMap<I, VertexMutations<I, V, E, M>>();
+
+    /** Maximum size of cached message list, before sending it out */
+    private final int maxSize;
+    /** Cached job id */
+    private final String jobId;
+    /** Cached job token */
+    private final J jobToken;
+    /** Maximum number of vertices sent in a single RPC */
+    private static final int MAX_VERTICES_PER_RPC = 1024;
+
+    /**
+     * PeerConnection contains RPC client and accumulated messages
+     * for a specific peer.
+     */
+    private class PeerConnection {
+        /**
+         * Map of outbound messages going to a particular remote server,
+         * mapping from the destination vertex to a list of messages.
+         * (Synchronized with itself).
+         */
+        private final Map<I, MsgList<M>> outMessagesPerPeer;
+        /**
+         * Client interface: RPC proxy for remote server, this class for local
+         */
+        private final CommunicationsInterface<I, V, E, M> peer;
+        /** Boolean, set to false when local client (self), true otherwise */
+        private final boolean isProxy;
+
+        public PeerConnection(Map<I, MsgList<M>> m,
+            CommunicationsInterface<I, V, E, M> i,
+            boolean isProxy) {
+
+            this.outMessagesPerPeer = m;
+            this.peer = i;
+            this.isProxy = isProxy;
+        }
+
+        public void close() {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("close: Done");
+            }
+        }
+
+        public CommunicationsInterface<I, V, E, M> getRPCProxy() {
+            return peer;
+        }
+
+        @Override
+        public String toString() {
+            return peer.getName() + ", proxy=" + isProxy;
+        }
+    }
+
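+    /**
+     * Flushes all cached outgoing messages for a single peer connection,
+     * applying the combiner (when one is configured) before sending the
+     * messages in bulk via putVertexIdMessagesList().
+     */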
+    private class PeerFlushExecutor implements Runnable {
+        private final PeerConnection peerConnection;
+        private final Mapper<?, ?, ?, ?>.Context context;
+        // Report on the status of this flusher if this interval was exceeded
+        private static final int REPORTING_INTERVAL_MIN_MILLIS = 60000;
+
+        PeerFlushExecutor(PeerConnection peerConnection,
+                          Mapper<?, ?, ?, ?>.Context context) {
+            this.peerConnection = peerConnection;
+            this.context = context;
+        }
+
+        @Override
+        public void run() {
+            CommunicationsInterface<I, V, E, M> proxy
+                = peerConnection.getRPCProxy();
+            long startMillis = System.currentTimeMillis();
+            long lastReportedMillis = startMillis;
+            try {
+                int verticesDone = 0;
+                synchronized(peerConnection.outMessagesPerPeer) {
+                    final int vertices =
+                        peerConnection.outMessagesPerPeer.size();
+                    // 1. Check for null messages and combine if possible
+                    // 2. Send vertex ids and messages in bulk to the
+                    //    destination servers.
+                    for (Entry<I, MsgList<M>> entry :
+                            peerConnection.outMessagesPerPeer.entrySet()) {
+                        for (M msg : entry.getValue()) {
+                            if (msg == null) {
+                                throw new IllegalArgumentException(
+                                    "run: Cannot put null message on " +
+                                    "vertex id " + entry.getKey());
+                            }
+                        }
+                        if (combiner != null && entry.getValue().size() > 1) {
+                            Iterable<M> messages = combiner.combine(
+                                    entry.getKey(), entry.getValue());
+                            if (messages == null) {
+                                throw new IllegalStateException(
+                                        "run: Combiner cannot return null");
+                            }
+                            if (Iterables.size(entry.getValue()) <
+                                    Iterables.size(messages)) {
+                                throw new IllegalStateException(
+                                        "run: The number of combined " +
+                                        "messages is required to be <= to " +
+                                        "number of messages to be combined");
+                            }
+                            entry.getValue().clear();
+                            for (M msg: messages) {
+                                entry.getValue().add(msg);
+                            }
+                        }
+                        if (entry.getValue().isEmpty()) {
+                            throw new IllegalStateException(
+                                "run: Impossible for no messages in " +
+                                entry.getKey());
+                        }
+                    }
+                    while (!peerConnection.outMessagesPerPeer.isEmpty()) {
+                        int bulkedMessages = 0;
+                        Iterator<Entry<I, MsgList<M>>> vertexIdMessagesListIt =
+                            peerConnection.outMessagesPerPeer.entrySet().
+                            iterator();
+                        VertexIdMessagesList<I, M> vertexIdMessagesList =
+                            new VertexIdMessagesList<I, M>();
+                        while (vertexIdMessagesListIt.hasNext()) {
+                            Entry<I, MsgList<M>> entry =
+                                vertexIdMessagesListIt.next();
+                            // Add this entry if the list is empty or we
+                            // haven't reached the maximum number of messages
+                            if (vertexIdMessagesList.isEmpty() ||
+                                    ((bulkedMessages + entry.getValue().size())
+                                     < maxMessagesPerFlushPut)) {
+                                vertexIdMessagesList.add(
+                                    new VertexIdMessages<I, M>(
+                                        entry.getKey(), entry.getValue()));
+                                bulkedMessages += entry.getValue().size();
+                            }
+                        }
+
+                        // Clean up references to the vertex id and messages
+                        for (VertexIdMessages<I, M> vertexIdMessages :
+                                vertexIdMessagesList) {
+                            peerConnection.outMessagesPerPeer.remove(
+                                vertexIdMessages.getVertexId());
+                        }
+
+                        proxy.putVertexIdMessagesList(vertexIdMessagesList);
+                        context.progress();
+
+                        verticesDone += vertexIdMessagesList.size();
+                        long curMillis = System.currentTimeMillis();
+                        if ((lastReportedMillis +
+                                REPORTING_INTERVAL_MIN_MILLIS) < curMillis) {
+                            lastReportedMillis = curMillis;
+                            if (LOG.isInfoEnabled()) {
+                                float percentDone =
+                                    (100f * verticesDone) /
+                                    vertices;
+                                float minutesUsed =
+                                    (curMillis - startMillis) / 1000f / 60f;
+                                float minutesRemaining =
+                                    (minutesUsed * 100f / percentDone) -
+                                    minutesUsed;
+                                LOG.info("run: " + peerConnection + ", " +
+                                         verticesDone + " out of " +
+                                         vertices  +
+                                         " done in " + minutesUsed +
+                                         " minutes, " +
+                                         percentDone + "% done, ETA " +
+                                         minutesRemaining +
+                                         " minutes remaining, " +
+                                         MemoryUtils.getRuntimeMemoryStats());
+                            }
+                        }
+                    }
+                }
+
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("run: " + proxy.getName() +
+                        ": all messages flushed");
+                }
+            } catch (IOException e) {
+                LOG.error(e);
+                if (peerConnection.isProxy) {
+                    RPC.stopProxy(peerConnection.peer);
+                }
+                throw new RuntimeException(e);
+            }
+        }
+    }
+
+    /**
+     * LargeMessageFlushExecutor flushes all outgoing messages destined to a
+     * given vertex.  This is executed when the number of messages destined
+     * to a certain vertex exceeds <i>maxSize</i>.
+     */
+    private class LargeMessageFlushExecutor implements Runnable {
+        private final I destVertex;
+        private final MsgList<M> outMessageList;
+        private PeerConnection peerConnection;
+
+        LargeMessageFlushExecutor(PeerConnection peerConnection, I destVertex) {
+            this.peerConnection = peerConnection;
+            synchronized(peerConnection.outMessagesPerPeer) {
+                this.destVertex = destVertex;
+                outMessageList =
+                    peerConnection.outMessagesPerPeer.get(destVertex);
+                peerConnection.outMessagesPerPeer.remove(destVertex);
+            }
+        }
+
+        @Override
+        public void run() {
+            try {
+                CommunicationsInterface<I, V, E, M> proxy =
+                    peerConnection.getRPCProxy();
+
+                if (combiner != null) {
+                    Iterable<M> messages = combiner.combine(destVertex,
+                                                            outMessageList);
+                    if (messages == null) {
+                        throw new IllegalStateException(
+                                "run: Combiner cannot return null");
+                    }
+                    if (Iterables.size(outMessageList) <
+                            Iterables.size(messages)) {
+                        throw new IllegalStateException(
+                                "run: The number of combined messages is " +
+                                "required to be <= to the number of " +
+                                "messages to be combined");
+                    }
+                    for (M msg: messages) {
+                        proxy.putMsg(destVertex, msg);
+                    }
+                } else {
+                    proxy.putMsgList(destVertex, outMessageList);
+                }
+            } catch (IOException e) {
+                LOG.error(e);
+                if (peerConnection.isProxy) {
+                    RPC.stopProxy(peerConnection.peer);
+                }
+                throw new RuntimeException("run: Got IOException", e);
+            } finally {
+                outMessageList.clear();
+            }
+        }
+    }
+
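+    /**
+     * Flush the cached message list of a single vertex asynchronously on
+     * the thread pool.  Invoked when that vertex's list outgrows maxSize.
+     *
+     * @param addr Address of the worker owning the destination vertex
+     * @param destVertex Destination vertex whose messages are flushed
+     */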
+    private void submitLargeMessageSend(InetSocketAddress addr, I destVertex) {
+        PeerConnection pc = peerConnections.get(addr);
+        executor.execute(new LargeMessageFlushExecutor(pc, destVertex));
+    }
+
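+    /**
+     * Create the job token (required for secure Hadoop connections).
+     *
+     * @return Job token, implementation specific
+     * @throws IOException
+     */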
+    protected abstract J createJobToken() throws IOException;
+
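+    /**
+     * Get an RPC server bound to the given address.
+     *
+     * @param addr Address to bind the server to
+     * @param numHandlers Number of handler threads
+     * @param jobId Job id
+     * @param jobToken Job token, required for secure Hadoop
+     * @return RPC server
+     * @throws IOException
+     */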
+    protected abstract Server getRPCServer(
+        InetSocketAddress addr,
+        int numHandlers, String jobId, J jobToken) throws IOException;
+
+    /**
+     * Only constructor.
+     *
+     * @param context Context for getting configuration
+     * @param service Service worker to get the vertex ranges
+     * @throws IOException
+     * @throws UnknownHostException
+     * @throws InterruptedException
+     */
+    public BasicRPCCommunications(Mapper<?, ?, ?, ?>.Context context,
+                                  CentralizedServiceWorker<I, V, E, M> service)
+            throws IOException, UnknownHostException, InterruptedException {
+        this.service = service;
+        this.conf = context.getConfiguration();
+        this.maxSize = conf.getInt(GiraphJob.MSG_SIZE,
+                                   GiraphJob.MSG_SIZE_DEFAULT);
+        this.maxMessagesPerFlushPut =
+            conf.getInt(GiraphJob.MAX_MESSAGES_PER_FLUSH_PUT,
+                        GiraphJob.DEFAULT_MAX_MESSAGES_PER_FLUSH_PUT);
+        if (BspUtils.getVertexCombinerClass(conf) == null) {
+            this.combiner = null;
+        } else {
+            this.combiner = BspUtils.createVertexCombiner(conf);
+        }
+
+        this.localHostname = InetAddress.getLocalHost().getHostName();
+        int taskId = conf.getInt("mapred.task.partition", -1);
+        int numTasks = conf.getInt("mapred.map.tasks", 1);
+
+        int numHandlers = conf.getInt(GiraphJob.RPC_NUM_HANDLERS,
+                                      GiraphJob.RPC_NUM_HANDLERS_DEFAULT);
+        if (numTasks < numHandlers) {
+            numHandlers = numTasks;
+        }
+        this.jobToken = createJobToken();
+        this.jobId = context.getJobID().toString();
+
+        int numWorkers = conf.getInt(GiraphJob.MAX_WORKERS, numTasks);
+        // If the number of flush threads is unset, it is set to
+        // the number of max workers - 1 or a minimum of 1.
+        int numFlushThreads =
+             Math.max(conf.getInt(GiraphJob.MSG_NUM_FLUSH_THREADS,
+                                  numWorkers - 1),
+                      1);
+        this.executor = Executors.newFixedThreadPool(numFlushThreads);
+
+        // Simple handling of port collisions on the same machine while
+        // preserving debuggability from the port number alone.
+        // Round up the max number of workers to the next power of 10 and use
+        // it as a constant to increase the port number with.
+        int portIncrementConstant =
+            (int) Math.pow(10, Math.ceil(Math.log10(numWorkers)));
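+        // Example: with numWorkers = 30, ceil(log10(30)) = 2, so the
+        // constant is 100 and each failed bind attempt below retries at
+        // bindPort + 100, then + 200, and so on.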
+        String bindAddress = localHostname;
+        int bindPort = conf.getInt(GiraphJob.RPC_INITIAL_PORT,
+                                   GiraphJob.RPC_INITIAL_PORT_DEFAULT) +
+                                   taskId;
+        int bindAttempts = 0;
+        final int maxRpcPortBindAttempts =
+            conf.getInt(GiraphJob.MAX_RPC_PORT_BIND_ATTEMPTS,
+                        GiraphJob.MAX_RPC_PORT_BIND_ATTEMPTS_DEFAULT);
+        while (bindAttempts < maxRpcPortBindAttempts) {
+            this.myAddress = new InetSocketAddress(bindAddress, bindPort);
+            try {
+                this.server =
+                    getRPCServer(
+                        myAddress, numHandlers, this.jobId, this.jobToken);
+                break;
+            } catch (BindException e) {
+                LOG.info("BasicRPCCommunications: Failed to bind with port " +
+                         bindPort + " on bind attempt " + bindAttempts);
+                ++bindAttempts;
+                bindPort += portIncrementConstant;
+            }
+        }
+        if (bindAttempts == maxRpcPortBindAttempts) {
+            throw new IllegalStateException(
+                "BasicRPCCommunications: Failed to start RPCServer with " +
+                maxRpcPortBindAttempts + " attempts");
+        }
+
+        this.server.start();
+        this.myName = myAddress.toString();
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("BasicRPCCommunications: Started RPC " +
+                     "communication server: " + myName + " with " +
+                     numHandlers + " handlers and " + numFlushThreads +
+                     " flush threads on bind attempt " + bindAttempts);
+        }
+    }
+
+    /**
+     * Get the final port of the RPC server that it bound to.
+     *
+     * @return Port that RPC server was bound to.
+     */
+    public int getPort() {
+        return myAddress.getPort();
+    }
+
+    @Override
+    public void setup() {
+        try {
+            connectAllRPCProxys(this.jobId, this.jobToken);
+        } catch (IOException e) {
+            throw new IllegalStateException("setup: Got IOException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException("setup: Got InterruptedException",
+                                            e);
+        }
+    }
+
+    protected abstract CommunicationsInterface<I, V, E, M> getRPCProxy(
+        final InetSocketAddress addr, String jobId, J jobToken)
+        throws IOException, InterruptedException;
+
+    /**
+     * Establish connections to every RPC proxy server that will be used in
+     * the upcoming messaging.  This method is idempotent.
+     *
+     * @param jobId Stringified job id
+     * @param jobToken Token required for secure Hadoop connections
+     * @throws InterruptedException
+     * @throws IOException
+     */
+    private void connectAllRPCProxys(String jobId, J jobToken)
+            throws IOException, InterruptedException {
+        final int maxTries = 5;
+        for (PartitionOwner partitionOwner : service.getPartitionOwners()) {
+            int tries = 0;
+            while (tries < maxTries) {
+                try {
+                    startPeerConnectionThread(
+                        partitionOwner.getWorkerInfo(), jobId, jobToken);
+                    break;
+                } catch (IOException e) {
+                    LOG.warn("connectAllRPCProxys: Failed on attempt " +
+                             tries + " of " + maxTries +
+                             " to connect to " + partitionOwner.toString(), e);
+                    ++tries;
+                }
+            }
+        }
+    }
+
+    /**
+     * Creates the connection to a remote RPC server if and only if the inet
+     * socket address doesn't already exist.
+     *
+     * @param workerInfo Worker info of the peer to connect to
+     * @param jobId Id of the job
+     * @param jobToken Required for secure Hadoop
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    private void startPeerConnectionThread(WorkerInfo workerInfo,
+                                           String jobId,
+                                           J jobToken)
+            throws IOException, InterruptedException {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("startPeerConnectionThread: hostname " +
+                      workerInfo.getHostname() + ", port " +
+                      workerInfo.getPort());
+        }
+        final InetSocketAddress addr =
+            new InetSocketAddress(workerInfo.getHostname(),
+                                  workerInfo.getPort());
+        // Cheap way to hold both the hostname and port (rather than
+        // make a class)
+        InetSocketAddress addrUnresolved =
+            InetSocketAddress.createUnresolved(addr.getHostName(),
+                                               addr.getPort());
+        Map<I, MsgList<M>> outMsgMap = null;
+        boolean isProxy = true;
+        CommunicationsInterface<I, V, E, M> peer = this;
+        synchronized(outMessages) {
+            outMsgMap = outMessages.get(addrUnresolved);
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("startPeerConnectionThread: Connecting to " +
+                          workerInfo.toString() + ", addr = " + addr +
+                          " if outMsgMap (" + outMsgMap + ") == null ");
+            }
+            if (outMsgMap != null) { // this host has already been added
+                return;
+            }
+
+            if (myName.equals(addr.toString())) {
+                isProxy = false;
+            } else {
+                peer = getRPCProxy(addr, jobId, jobToken);
+            }
+
+            outMsgMap = new HashMap<I, MsgList<M>>();
+            outMessages.put(addrUnresolved, outMsgMap);
+        }
+
+        PeerConnection peerConnection =
+            new PeerConnection(outMsgMap, peer, isProxy);
+        peerConnections.put(addrUnresolved, peerConnection);
+    }
+
+    @Override
+    public final long getProtocolVersion(String protocol, long clientVersion)
+            throws IOException {
+        return versionID;
+    }
+
+/*if[HADOOP_FACEBOOK]
+    public ProtocolSignature getProtocolSignature(
+            String protocol,
+            long clientVersion,
+            int clientMethodsHash) throws IOException {
+        return new ProtocolSignature(versionID, null);
+    }
+end[HADOOP_FACEBOOK]*/
+
+    @Override
+    public void closeConnections() throws IOException {
+        for(PeerConnection pc : peerConnections.values()) {
+            pc.close();
+        }
+    }
+
+
+    @Override
+    public final void close() {
+        LOG.info("close: shutting down RPC server");
+        server.stop();
+    }
+
+    @Override
+    public final void putMsg(I vertex, M msg) throws IOException {
+        List<M> msgs = null;
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("putMsg: Adding msg " + msg + " on vertex " + vertex);
+        }
+        if (inPrepareSuperstep) {
+            // Called by combiner (main thread) during superstep preparation
+            msgs = inMessages.get(vertex);
+            if (msgs == null) {
+                msgs = new ArrayList<M>();
+                inMessages.put(vertex, msgs);
+            }
+            msgs.add(msg);
+        } else {
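+            // Find or create the per-vertex list under the map lock, then
+            // append under the list lock so concurrent RPC handlers don't
+            // serialize on the whole map.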
+            synchronized(transientInMessages) {
+                msgs = transientInMessages.get(vertex);
+                if (msgs == null) {
+                    msgs = new ArrayList<M>();
+                    transientInMessages.put(vertex, msgs);
+                }
+            }
+            synchronized(msgs) {
+                msgs.add(msg);
+            }
+        }
+    }
+
+    @Override
+    public final void putMsgList(I vertex,
+                                 MsgList<M> msgList) throws IOException {
+        List<M> msgs = null;
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("putMsgList: Adding msgList " + msgList +
+                      " on vertex " + vertex);
+        }
+        synchronized(transientInMessages) {
+            msgs = transientInMessages.get(vertex);
+            if (msgs == null) {
+                msgs = new ArrayList<M>(msgList.size());
+                transientInMessages.put(vertex, msgs);
+            }
+        }
+        synchronized(msgs) {
+            msgs.addAll(msgList);
+        }
+    }
+
+    @Override
+    public final void putVertexIdMessagesList(
+            VertexIdMessagesList<I, M> vertexIdMessagesList)
+            throws IOException {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("putVertexIdMessagesList: Adding msgList " +
+                      vertexIdMessagesList);
+        }
+
+        List<M> messageList = null;
+        for (VertexIdMessages<I, M> vertexIdMessages : vertexIdMessagesList) {
+            synchronized(transientInMessages) {
+                messageList =
+                    transientInMessages.get(vertexIdMessages.getVertexId());
+                if (messageList == null) {
+                    messageList = new ArrayList<M>(
+                        vertexIdMessages.getMessageList().size());
+                    transientInMessages.put(
+                        vertexIdMessages.getVertexId(), messageList);
+                }
+            }
+            synchronized(messageList) {
+                messageList.addAll(vertexIdMessages.getMessageList());
+            }
+        }
+    }
+
+    @Override
+    public final void putVertexList(int partitionId,
+                                    VertexList<I, V, E, M> vertexList)
+            throws IOException {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("putVertexList: On partition id " + partitionId +
+                      " adding vertex list of size " + vertexList.size());
+        }
+        synchronized(inPartitionVertexMap) {
+            if (vertexList.size() == 0) {
+                return;
+            }
+            if (!inPartitionVertexMap.containsKey(partitionId)) {
+                inPartitionVertexMap.put(partitionId,
+                    new ArrayList<BasicVertex<I, V, E, M>>(vertexList));
+            } else {
+                List<BasicVertex<I, V, E, M>> tmpVertexList =
+                    inPartitionVertexMap.get(partitionId);
+                tmpVertexList.addAll(vertexList);
+            }
+        }
+    }
+
+    @Override
+    public final void addEdge(I vertexIndex, Edge<I, E> edge) {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("addEdge: Adding edge " + edge);
+        }
+        synchronized(inVertexMutationsMap) {
+            VertexMutations<I, V, E, M> vertexMutations = null;
+            if (!inVertexMutationsMap.containsKey(vertexIndex)) {
+                vertexMutations = new VertexMutations<I, V, E, M>();
+                inVertexMutationsMap.put(vertexIndex, vertexMutations);
+            } else {
+                vertexMutations = inVertexMutationsMap.get(vertexIndex);
+            }
+            vertexMutations.addEdge(edge);
+        }
+    }
+
+    @Override
+    public void removeEdge(I vertexIndex, I destinationVertexIndex) {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("removeEdge: Removing edge on destination " +
+                      destinationVertexIndex);
+        }
+        synchronized(inVertexMutationsMap) {
+            VertexMutations<I, V, E, M> vertexMutations = null;
+            if (!inVertexMutationsMap.containsKey(vertexIndex)) {
+                vertexMutations = new VertexMutations<I, V, E, M>();
+                inVertexMutationsMap.put(vertexIndex, vertexMutations);
+            } else {
+                vertexMutations = inVertexMutationsMap.get(vertexIndex);
+            }
+            vertexMutations.removeEdge(destinationVertexIndex);
+        }
+    }
+
+    @Override
+    public final void addVertex(BasicVertex<I, V, E, M> vertex) {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("addVertex: Adding vertex " + vertex);
+        }
+        synchronized(inVertexMutationsMap) {
+            VertexMutations<I, V, E, M> vertexMutations = null;
+            if (!inVertexMutationsMap.containsKey(vertex.getVertexId())) {
+                vertexMutations = new VertexMutations<I, V, E, M>();
+                inVertexMutationsMap.put(vertex.getVertexId(), vertexMutations);
+            } else {
+                vertexMutations = inVertexMutationsMap.get(vertex.getVertexId());
+            }
+            vertexMutations.addVertex(vertex);
+        }
+    }
+
+    @Override
+    public void removeVertex(I vertexIndex) {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("removeVertex: Removing vertex " + vertexIndex);
+        }
+        synchronized(inVertexMutationsMap) {
+            VertexMutations<I, V, E, M> vertexMutations = null;
+            if (!inVertexMutationsMap.containsKey(vertexIndex)) {
+                vertexMutations = new VertexMutations<I, V, E, M>();
+                inVertexMutationsMap.put(vertexIndex, vertexMutations);
+            } else {
+                vertexMutations = inVertexMutationsMap.get(vertexIndex);
+            }
+            vertexMutations.removeVertex();
+        }
+    }
+
+    @Override
+    public final void sendPartitionReq(WorkerInfo workerInfo,
+                                       Partition<I, V, E, M> partition) {
+        // Internally, break up the sending so that the list doesn't get too
+        // big.
+        VertexList<I, V, E, M> hadoopVertexList =
+            new VertexList<I, V, E, M>();
+        InetSocketAddress addr =
+            getInetSocketAddress(workerInfo, partition.getPartitionId());
+        CommunicationsInterface<I, V, E, M> rpcProxy =
+            peerConnections.get(addr).getRPCProxy();
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("sendPartitionReq: Sending to " + rpcProxy.getName() +
+                     " " + addr + " from " + workerInfo +
+                     ", with partition " + partition);
+        }
+        for (BasicVertex<I, V, E, M> vertex : partition.getVertices()) {
+            hadoopVertexList.add(vertex);
+            if (hadoopVertexList.size() >= MAX_VERTICES_PER_RPC) {
+                try {
+                    rpcProxy.putVertexList(partition.getPartitionId(),
+                                           hadoopVertexList);
+                } catch (IOException e) {
+                    throw new RuntimeException(e);
+                }
+                hadoopVertexList.clear();
+            }
+        }
+        if (hadoopVertexList.size() > 0) {
+            try {
+                rpcProxy.putVertexList(partition.getPartitionId(),
+                                       hadoopVertexList);
+            } catch (IOException e) {
+                throw new RuntimeException(e);
+            }
+        }
+    }
+
+    /**
+     * Fill the socket address cache for the worker info and its partition.
+     *
+     * @param workerInfo Worker information to get the socket address
+     * @param partitionId Partition id to cache the address for
+     * @return Address of the worker owning this partition
+     */
+    private InetSocketAddress getInetSocketAddress(WorkerInfo workerInfo,
+                                                   int partitionId) {
+        synchronized(partitionIndexAddressMap) {
+            InetSocketAddress address =
+                partitionIndexAddressMap.get(partitionId);
+            if (address == null) {
+                address = InetSocketAddress.createUnresolved(
+                    workerInfo.getHostname(),
+                    workerInfo.getPort());
+                partitionIndexAddressMap.put(partitionId, address);
+            }
+
+            if (address.getPort() != workerInfo.getPort() ||
+                    !address.getHostName().equals(workerInfo.getHostname())) {
+                throw new IllegalStateException(
+                    "getInetSocketAddress: Impossible that address " +
+                    address + " does not match " + workerInfo);
+            }
+
+            return address;
+        }
+    }
+
+    /**
+     * Fill the socket address cache for the partition owner.
+     *
+     * @param destVertex Destination vertex of the message to be sent
+     * @return Address of the worker containing this vertex
+     */
+    private InetSocketAddress getInetSocketAddress(I destVertex) {
+        PartitionOwner partitionOwner =
+            service.getVertexPartitionOwner(destVertex);
+        return getInetSocketAddress(partitionOwner.getWorkerInfo(),
+                                    partitionOwner.getPartitionId());
+    }
+
+    @Override
+    public final void sendMessageReq(I destVertex, M msg) {
+        InetSocketAddress addr = getInetSocketAddress(destVertex);
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("sendMessage: Send bytes (" + msg.toString() +
+                      ") to " + destVertex + " with address " + addr);
+        }
+        ++totalMsgsSentInSuperstep;
+        Map<I, MsgList<M>> msgMap = null;
+        synchronized(outMessages) {
+            msgMap = outMessages.get(addr);
+        }
+        if (msgMap == null) { // should never happen after constructor
+            throw new RuntimeException(
+                "sendMessage: msgMap did not exist for " + addr +
+                " for vertex " + destVertex);
+        }
+
+        synchronized(msgMap) {
+            MsgList<M> msgList = msgMap.get(destVertex);
+            if (msgList == null) { // should only happen once
+                msgList = new MsgList<M>();
+                msgMap.put(destVertex, msgList);
+            }
+            msgList.add(msg);
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("sendMessage: added msg=" + msg + ", size=" +
+                          msgList.size());
+            }
+            if (msgList.size() > maxSize) {
+                submitLargeMessageSend(addr, destVertex);
+            }
+        }
+    }
+
+    @Override
+    public final void addEdgeReq(I destVertex, Edge<I, E> edge)
+            throws IOException {
+        InetSocketAddress addr = getInetSocketAddress(destVertex);
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("addEdgeReq: Add edge (" + edge.toString() + ") to " +
+                      destVertex + " with address " + addr);
+        }
+        CommunicationsInterface<I, V, E, M> rpcProxy =
+            peerConnections.get(addr).getRPCProxy();
+        rpcProxy.addEdge(destVertex, edge);
+    }
+
+    @Override
+    public final void removeEdgeReq(I vertexIndex, I destVertexIndex)
+            throws IOException {
+        InetSocketAddress addr = getInetSocketAddress(vertexIndex);
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("removeEdgeReq: remove edge (" + destVertexIndex +
+                      ") from" + vertexIndex + " with address " + addr);
+        }
+        CommunicationsInterface<I, V, E, M> rpcProxy =
+            peerConnections.get(addr).getRPCProxy();
+        rpcProxy.removeEdge(vertexIndex, destVertexIndex);
+    }
+
+    @Override
+    public final void addVertexReq(BasicVertex<I, V, E, M> vertex)
+            throws IOException {
+        InetSocketAddress addr = getInetSocketAddress(vertex.getVertexId());
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("addVertexReq: Add vertex (" + vertex + ") " +
+                      " with address " + addr);
+        }
+        CommunicationsInterface<I, V, E, M> rpcProxy =
+            peerConnections.get(addr).getRPCProxy();
+        rpcProxy.addVertex(vertex);
+    }
+
+    @Override
+    public void removeVertexReq(I vertexIndex) throws IOException {
+        InetSocketAddress addr =
+            getInetSocketAddress(vertexIndex);
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("removeVertexReq: Remove vertex index (" +
+                      vertexIndex + ") with address " + addr);
+        }
+        CommunicationsInterface<I, V, E, M> rpcProxy =
+            peerConnections.get(addr).getRPCProxy();
+        rpcProxy.removeVertex(vertexIndex);
+    }
+
+    @Override
+    public long flush(Mapper<?, ?, ?, ?>.Context context) throws IOException {
+        if (LOG.isInfoEnabled()) {
+            LOG.info("flush: starting for superstep " +
+                      service.getSuperstep() + " " +
+                      MemoryUtils.getRuntimeMemoryStats());
+        }
+        for (List<M> msgList : inMessages.values()) {
+            msgList.clear();
+        }
+        inMessages.clear();
+
+        Collection<Future<?>> futures = new ArrayList<Future<?>>();
+
+        // randomize peers in order to avoid hotspots on racks
+        List<PeerConnection> peerList =
+            new ArrayList<PeerConnection>(peerConnections.values());
+        Collections.shuffle(peerList);
+
+        for (PeerConnection pc : peerList) {
+            futures.add(executor.submit(new PeerFlushExecutor(pc, context)));
+        }
+
+        // wait for all flushes
+        for (Future<?> future : futures) {
+            try {
+                future.get();
+                context.progress();
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "flush: Got InterruptedException", e);
+            } catch (ExecutionException e) {
+                throw new IllegalStateException(
+                    "flush: Got ExecutionException", e);
+            }
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("flush: ended for superstep " +
+                      service.getSuperstep() + " " +
+                      MemoryUtils.getRuntimeMemoryStats());
+        }
+
+        long msgs = totalMsgsSentInSuperstep;
+        totalMsgsSentInSuperstep = 0;
+        return msgs;
+    }
+
+    @Override
+    public void prepareSuperstep() {
+        if (LOG.isInfoEnabled()) {
+            LOG.info("prepareSuperstep: Superstep " +
+                     service.getSuperstep() + " " +
+                     MemoryUtils.getRuntimeMemoryStats());
+        }
+        inPrepareSuperstep = true;
+
+        // Combine and put the transient messages into the inMessages.
+        synchronized(transientInMessages) {
+            for (Entry<I, List<M>> entry : transientInMessages.entrySet()) {
+                if (combiner != null) {
+                    try {
+                        Iterable<M> messages =
+                            combiner.combine(entry.getKey(),
+                                             entry.getValue());
+                        if (messages == null) {
+                            throw new IllegalStateException(
+                                    "prepareSuperstep: Combiner cannot " +
+                                    "return null");
+                        }
+                        if (Iterables.size(entry.getValue()) <
+                                Iterables.size(messages)) {
+                            throw new IllegalStateException(
+                                    "prepareSuperstep: The number of " +
+                                    "combined messages must be at most " +
+                                    "the number of messages to be combined");
+                        }
+                        for (M msg: messages) {
+                            putMsg(entry.getKey(), msg);
+                        }
+                    } catch (IOException e) {
+                        // no actual IO -- should never happen
+                        throw new RuntimeException(e);
+                    }
+                } else {
+                    List<M> msgs = inMessages.get(entry.getKey());
+                    if (msgs == null) {
+                        msgs = new ArrayList<M>();
+                        inMessages.put(entry.getKey(), msgs);
+                    }
+                    msgs.addAll(entry.getValue());
+                }
+                entry.getValue().clear();
+            }
+            transientInMessages.clear();
+        }
+
+        if (inMessages.size() > 0) {
+            // Assign the messages to each destination vertex (getting rid of
+            // the old ones)
+            for (Partition<I, V, E, M> partition :
+                    service.getPartitionMap().values()) {
+                for (BasicVertex<I, V, E, M> vertex : partition.getVertices()) {
+                    List<M> msgList = inMessages.get(vertex.getVertexId());
+                    if (msgList != null) {
+                        if (LOG.isDebugEnabled()) {
+                            LOG.debug("prepareSuperstep: Assigning " +
+                                      msgList.size() +
+                                      " msgs to vertex " + vertex);
+                        }
+                        for (M msg : msgList) {
+                            if (msg == null) {
+                                LOG.warn("prepareSuperstep: Null message " +
+                                         "in inMessages");
+                            }
+                        }
+                        service.assignMessagesToVertex(vertex, msgList);
+                        msgList.clear();
+                        if (inMessages.remove(vertex.getVertexId()) == null) {
+                            throw new IllegalStateException(
+                                "prepareSuperstep: Failed to remove the " +
+                                "processed messages of vertex " + vertex);
+                        }
+                    }
+                }
+            }
+        }
+
+        inPrepareSuperstep = false;
+
+        // Resolve what happens when messages are sent to non-existent vertices
+        // and vertices that have mutations.  Also make sure that the messages
+        // are being sent to the correct destination
+        Set<I> resolveVertexIndexSet = new TreeSet<I>();
+        if (inMessages.size() > 0) {
+            for (Entry<I, List<M>> entry : inMessages.entrySet()) {
+                if (service.getPartition(entry.getKey()) == null) {
+                    throw new IllegalStateException(
+                        "prepareSuperstep: Impossible that this worker " +
+                        service.getWorkerInfo() + " was sent " +
+                        entry.getValue().size() + " message(s) with " +
+                        "vertex id " + entry.getKey() +
+                        " when it does not own this partition.  It should " +
+                        "have gone to partition owner " +
+                        service.getVertexPartitionOwner(entry.getKey()) +
+                        ".  The partition owners are " +
+                        service.getPartitionOwners());
+                }
+                resolveVertexIndexSet.add(entry.getKey());
+            }
+        }
+        synchronized(inVertexMutationsMap) {
+            for (I vertexIndex : inVertexMutationsMap.keySet()) {
+                resolveVertexIndexSet.add(vertexIndex);
+            }
+        }
+
+        // Resolve all graph mutations
+        for (I vertexIndex : resolveVertexIndexSet) {
+            VertexResolver<I, V, E, M> vertexResolver =
+                BspUtils.createVertexResolver(
+                    conf, service.getGraphMapper().getGraphState());
+            BasicVertex<I, V, E, M> originalVertex =
+                service.getVertex(vertexIndex);
+            Iterable<M> messages = inMessages.get(vertexIndex);
+            if (originalVertex != null) {
+                messages = originalVertex.getMessages();
+            }
+            VertexMutations<I, V, E, M> vertexMutations =
+                inVertexMutationsMap.get(vertexIndex);
+            BasicVertex<I, V, E, M> vertex =
+                vertexResolver.resolve(vertexIndex,
+                                       originalVertex,
+                                       vertexMutations,
+                                       messages);
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("prepareSuperstep: Resolved vertex index " +
+                          vertexIndex + " with original vertex " +
+                          originalVertex + ", returned vertex " + vertex +
+                          " on superstep " + service.getSuperstep() +
+                          " with mutations " +
+                          vertexMutations);
+            }
+
+            Partition<I, V, E, M> partition =
+                service.getPartition(vertexIndex);
+            if (partition == null) {
+                throw new IllegalStateException(
+                    "prepareSuperstep: No partition for index " + vertexIndex +
+                    " in " + service.getPartitionMap() + " should have been " +
+                    service.getVertexPartitionOwner(vertexIndex));
+            }
+            if (vertex != null) {
+                ((MutableVertex<I, V, E, M>) vertex).setVertexId(vertexIndex);
+                partition.putVertex((BasicVertex<I, V, E, M>) vertex);
+            } else if (originalVertex != null) {
+                partition.removeVertex(originalVertex.getVertexId());
+            }
+        }
+        synchronized(inVertexMutationsMap) {
+            inVertexMutationsMap.clear();
+        }
+    }
+
+    @Override
+    public void fixPartitionIdToSocketAddrMap() {
+        // 1. Fix all the cached inet addresses (remove all changed entries)
+        // 2. Connect to any new RPC servers
+        synchronized(partitionIndexAddressMap) {
+            for (PartitionOwner partitionOwner : service.getPartitionOwners()) {
+                InetSocketAddress address =
+                    partitionIndexAddressMap.get(
+                        partitionOwner.getPartitionId());
+                if (address != null &&
+                        (!address.getHostName().equals(
+                            partitionOwner.getWorkerInfo().getHostname()) ||
+                         address.getPort() !=
+                            partitionOwner.getWorkerInfo().getPort())) {
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("fixPartitionIdToSocketAddrMap: " +
+                                 "Partition owner " + partitionOwner +
+                                 " changed from " + address);
+                    }
+                    partitionIndexAddressMap.remove(
+                        partitionOwner.getPartitionId());
+                }
+            }
+        }
+        try {
+            connectAllRPCProxys(this.jobId, this.jobToken);
+        } catch (InterruptedException e) {
+            throw new RuntimeException(e);
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    @Override
+    public String getName() {
+        return myName;
+    }
+
+    @Override
+    public Map<Integer, List<BasicVertex<I, V, E, M>>> getInPartitionVertexMap() {
+        return inPartitionVertexMap;
+    }
+}
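
The per-destination batching in sendMessageReq() above is easiest to see in
isolation. Below is a minimal, self-contained sketch of the same idea in
plain Java (not Giraph code; MAX_SIZE stands in for maxSize, and strings
stand in for Writable messages): messages accumulate per destination, and a
batch is handed off as soon as its list crosses the threshold.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class BatchingSketch {
        /** Stand-in for the maxSize threshold used by sendMessageReq(). */
        private static final int MAX_SIZE = 3;
        /** Messages queued per destination vertex id. */
        private final Map<Integer, List<String>> outMessages =
            new HashMap<Integer, List<String>>();

        void sendMessage(int destVertex, String msg) {
            List<String> msgList = outMessages.get(destVertex);
            if (msgList == null) {
                msgList = new ArrayList<String>();
                outMessages.put(destVertex, msgList);
            }
            msgList.add(msg);
            if (msgList.size() > MAX_SIZE) {  // same trigger as the RPC code
                System.out.println("flushing " + msgList.size() +
                                   " msgs to vertex " + destVertex);
                msgList.clear();              // a real flush would RPC these
            }
        }

        public static void main(String[] args) {
            BatchingSketch sketch = new BatchingSketch();
            for (int i = 0; i < 5; ++i) {
                sketch.sendMessage(42, "m" + i);  // 4th message trips a flush
            }
        }
    }
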
diff --git a/src/main/java/org/apache/giraph/comm/CommunicationsInterface.java b/src/main/java/org/apache/giraph/comm/CommunicationsInterface.java
new file mode 100644
index 0000000..347181b
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/CommunicationsInterface.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import java.io.IOException;
+
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.BasicVertex;
+/*if_not[HADOOP]
+ else[HADOOP]*/
+import org.apache.giraph.hadoop.BspTokenSelector;
+import org.apache.hadoop.security.token.TokenInfo;
+/*end[HADOOP]*/
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.ipc.VersionedProtocol;
+
+/**
+ * Basic interface for communication between workers.
+ *
+ * @param <I> Vertex id
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message data
+ */
+@SuppressWarnings("rawtypes")
+/*if_not[HADOOP]
+ else[HADOOP]*/
+@TokenInfo(BspTokenSelector.class)
+/*end[HADOOP]*/
+public interface CommunicationsInterface<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends VersionedProtocol {
+
+    /**
+     * Interface Version History
+     *
+     * 0 - First Version
+     */
+    static final long versionID = 0L;
+
+    /**
+     * Adds incoming message.
+     *
+     * @param vertexIndex Destination vertex index
+     * @param msg Message to add
+     * @throws IOException
+     */
+    void putMsg(I vertexIndex, M msg) throws IOException;
+
+    /**
+     * Adds incoming message list.
+     *
+     * @param vertexIndex Vertex index where the messages are added
+     * @param msgList messages added
+     * @throws IOException
+     */
+    void putMsgList(I vertexIndex, MsgList<M> msgList) throws IOException;
+
+    /**
+     * Adds a list of vertex ids and their respective message lists.
+     *
+     * @param vertexIdMessagesList List of vertex ids, each with its messages
+     * @throws IOException
+     */
+    void putVertexIdMessagesList(
+        VertexIdMessagesList<I, M> vertexIdMessagesList) throws IOException;
+
+    /**
+     * Adds vertex list (index, value, edges, etc.) to the appropriate worker.
+     *
+     * @param partitionId Partition id of the vertices to be added.
+     * @param vertexList List of vertices to add
+     */
+    void putVertexList(int partitionId,
+                       VertexList<I, V, E, M> vertexList) throws IOException;
+
+    /**
+     * Add an edge to a remote vertex
+     *
+     * @param vertexIndex Vertex index where the edge is added
+     * @param edge Edge to be added
+     * @throws IOException
+     */
+    void addEdge(I vertexIndex, Edge<I, E> edge) throws IOException;
+
+    /**
+     * Remove an edge on a remote vertex
+     *
+     * @param vertexIndex Vertex index where the edge is removed
+     * @param destinationVertexIndex Edge vertex index to be removed
+     * @throws IOException
+     */
+    void removeEdge(I vertexIndex, I destinationVertexIndex) throws IOException;
+
+    /**
+     * Add a remote vertex
+     *
+     * @param vertex Vertex that will be added
+     * @throws IOException
+     */
+    void addVertex(BasicVertex<I, V, E, M> vertex) throws IOException;
+
+    /**
+     * Remove a remote vertex
+     *
+     * @param vertexIndex Vertex index representing vertex to be removed
+     * @throws IOException
+     */
+    void removeVertex(I vertexIndex) throws IOException;
+
+    /**
+     * @return The name of this worker in the format "hostname:port".
+     */
+    String getName();
+}
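
As a hedged sketch of how a client obtains and uses a proxy for this
interface, mirroring the insecure branch of RPCCommunications.getRPCProxy()
further below (the host, port, and concrete Writable types here are made up
for illustration):

    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.ipc.RPC;

    public class ProxySketch {
        @SuppressWarnings("unchecked")
        public static void main(String[] args) throws Exception {
            // Hypothetical worker address; in Giraph this comes from the
            // partition owner's WorkerInfo.
            InetSocketAddress addr =
                new InetSocketAddress("worker-host", 30000);
            Configuration conf = new Configuration();
            CommunicationsInterface<LongWritable, DoubleWritable,
                    FloatWritable, DoubleWritable> proxy =
                (CommunicationsInterface<LongWritable, DoubleWritable,
                    FloatWritable, DoubleWritable>)
                RPC.getProxy(CommunicationsInterface.class,
                             CommunicationsInterface.versionID, addr, conf);
            // Deliver one message to vertex 5 on the remote worker.
            proxy.putMsg(new LongWritable(5L), new DoubleWritable(0.85));
            RPC.stopProxy(proxy);
        }
    }
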
diff --git a/src/main/java/org/apache/giraph/comm/MsgList.java b/src/main/java/org/apache/giraph/comm/MsgList.java
new file mode 100644
index 0000000..69579b0
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/MsgList.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Wrapper around {@link ArrayListWritable} that allows the message class to
+ * be set prior to calling readFields().
+ *
+ * @param <M> message type
+ */
+public class MsgList<M extends Writable>
+    extends ArrayListWritable<M> {
+    /** Defining a layout version for a serializable class. */
+    private static final long serialVersionUID = 100L;
+
+    public MsgList() {
+        super();
+    }
+    
+    public MsgList(MsgList<M> msgList) {
+        super(msgList);
+    }
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public void setClass() {
+        setClass((Class<M>) BspUtils.getMessageValueClass(getConf()));
+    }
+}
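
Since setClass() pulls the message class from the configuration, a MsgList
can only be deserialized after its conf is set. A round-trip sketch,
assuming ArrayListWritable wires setClass() into readFields() as the javadoc
above suggests (the configuration key name is an assumption; the real
constant lives in GiraphJob):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DataInputBuffer;
    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Writable;

    public class MsgListRoundTrip {
        public static void main(String[] args) throws Exception {
            MsgList<DoubleWritable> out = new MsgList<DoubleWritable>();
            out.add(new DoubleWritable(1.0));
            out.add(new DoubleWritable(2.0));
            DataOutputBuffer buffer = new DataOutputBuffer();
            out.write(buffer);                 // plain Writable serialization

            Configuration conf = new Configuration();
            conf.setClass("giraph.msgValueClass",  // assumed key name
                          DoubleWritable.class, Writable.class);
            MsgList<DoubleWritable> in = new MsgList<DoubleWritable>();
            in.setConf(conf);                  // setClass() consults this conf
            DataInputBuffer input = new DataInputBuffer();
            input.reset(buffer.getData(), buffer.getLength());
            in.readFields(input);
            System.out.println("read back " + in.size() + " messages");
        }
    }
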
diff --git a/src/main/java/org/apache/giraph/comm/RPCCommunications.java b/src/main/java/org/apache/giraph/comm/RPCCommunications.java
new file mode 100644
index 0000000..152bbfa
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/RPCCommunications.java
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import java.io.IOException;
+
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
+
+/*if_not[HADOOP]
+else[HADOOP]*/
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.mapreduce.security.TokenCache;
+import org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier;
+import org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authorize.ServiceAuthorizationManager;
+import org.apache.hadoop.security.token.Token;
+/*end[HADOOP]*/
+
+import org.apache.log4j.Logger;
+
+import org.apache.giraph.bsp.CentralizedServiceWorker;
+import org.apache.giraph.graph.GraphState;
+import org.apache.giraph.hadoop.BspPolicyProvider;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.mapreduce.Mapper;
+
+@SuppressWarnings("rawtypes")
+public class RPCCommunications<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+/*if_not[HADOOP]
+extends BasicRPCCommunications<I, V, E, M, Object> {
+else[HADOOP]*/
+        extends BasicRPCCommunications<I, V, E, M, Token<JobTokenIdentifier>> {
+/*end[HADOOP]*/
+
+    /** Class logger */
+    public static final Logger LOG = Logger.getLogger(RPCCommunications.class);
+
+    public RPCCommunications(Mapper<?, ?, ?, ?>.Context context,
+                             CentralizedServiceWorker<I, V, E, M> service,
+                             GraphState<I, V, E, M> graphState)
+            throws IOException, UnknownHostException, InterruptedException {
+        super(context, service);
+    }
+
+/*if_not[HADOOP]
+    protected Object createJobToken() throws IOException {
+        return null;
+    }
+else[HADOOP]*/
+    protected Token<JobTokenIdentifier> createJobToken() throws IOException {
+        String localJobTokenFile = System.getenv().get(
+                UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION);
+        if (localJobTokenFile != null) {
+            Credentials credentials =
+                TokenCache.loadTokens(localJobTokenFile, conf);
+            return TokenCache.getJobToken(credentials);
+        }
+        return null;
+    }
+/*end[HADOOP]*/
+
+    protected Server getRPCServer(
+            InetSocketAddress myAddress, int numHandlers, String jobId,
+/*if_not[HADOOP]
+            Object jt) throws IOException {
+        return RPC.getServer(this, myAddress.getHostName(), myAddress.getPort(),
+            numHandlers, false, conf);
+    }
+else[HADOOP]*/
+            Token<JobTokenIdentifier> jt) throws IOException {
+        @SuppressWarnings("deprecation")
+        String hadoopSecurityAuthorization =
+            ServiceAuthorizationManager.SERVICE_AUTHORIZATION_CONFIG;
+        if (conf.getBoolean(
+                    hadoopSecurityAuthorization,
+                    false)) {
+            ServiceAuthorizationManager.refresh(conf, new BspPolicyProvider());
+        }
+        JobTokenSecretManager jobTokenSecretManager =
+            new JobTokenSecretManager();
+        if (jt != null) { //could be null in the case of some unit tests
+            jobTokenSecretManager.addTokenForJob(jobId, jt);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getRPCServer: Added jobToken " + jt);
+            }
+        }
+        return RPC.getServer(this, myAddress.getHostName(), myAddress.getPort(),
+                numHandlers, false, conf, jobTokenSecretManager);
+    }
+/*end[HADOOP]*/
+
+    protected CommunicationsInterface<I, V, E, M> getRPCProxy(
+            final InetSocketAddress addr,
+            String jobId,
+/*if_not[HADOOP]
+            Object jt)
+else[HADOOP]*/
+            Token<JobTokenIdentifier> jt)
+/*end[HADOOP]*/
+            throws IOException, InterruptedException {
+        final Configuration config = new Configuration(conf);
+
+/*if_not[HADOOP]
+        @SuppressWarnings("unchecked")
+        CommunicationsInterface<I, V, E, M> proxy =
+            (CommunicationsInterface<I, V, E, M>)RPC.getProxy(
+                 CommunicationsInterface.class, versionID, addr, config);
+        return proxy;
+else[HADOOP]*/
+        if (jt == null) {
+            @SuppressWarnings("unchecked")
+            CommunicationsInterface<I, V, E, M> proxy =
+                (CommunicationsInterface<I, V, E, M>)RPC.getProxy(
+                     CommunicationsInterface.class, versionID, addr, config);
+            return proxy;
+        }
+        jt.setService(new Text(addr.getAddress().getHostAddress() + ":"
+                               + addr.getPort()));
+        UserGroupInformation current = UserGroupInformation.getCurrentUser();
+        current.addToken(jt);
+        UserGroupInformation owner =
+            UserGroupInformation.createRemoteUser(jobId);
+        owner.addToken(jt);
+        @SuppressWarnings("unchecked")
+        CommunicationsInterface<I, V, E, M> proxy =
+                      owner.doAs(new PrivilegedExceptionAction<
+                              CommunicationsInterface<I, V, E, M>>() {
+            @Override
+            public CommunicationsInterface<I, V, E, M> run() throws Exception {
+                // All methods in CommunicationsInterface will be used for RPC
+                return (CommunicationsInterface<I, V, E, M> )RPC.getProxy(
+                    CommunicationsInterface.class, versionID, addr, config);
+            }
+        });
+        return proxy;
+/*end[HADOOP]*/
+    }
+}
diff --git a/src/main/java/org/apache/giraph/comm/ServerInterface.java b/src/main/java/org/apache/giraph/comm/ServerInterface.java
new file mode 100644
index 0000000..9ba95d1
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/ServerInterface.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+
+/**
+ * Interface for message communication server
+ */
+@SuppressWarnings("rawtypes")
+public interface ServerInterface<I extends WritableComparable,
+                                 V extends Writable,
+                                 E extends Writable,
+                                 M extends Writable>
+                                 extends Closeable,
+                                 WorkerCommunications<I, V, E, M> {
+    /**
+     *  Setup the server.
+     */
+    void setup();
+
+    /**
+     * Move the in-transit messages into the in messages for every vertex
+     * and add new connections to any newly appearing RPC proxies.
+     */
+    void prepareSuperstep();
+
+    /**
+     * Flush all outgoing messages.  This will synchronously ensure that all
+     * messages have been sent and delivered prior to returning.
+     *
+     * @param context Context used to signal progress
+     * @return Number of messages sent during the last superstep
+     * @throws IOException
+     */
+    long flush(Mapper<?, ?, ?, ?>.Context context) throws IOException;
+
+    /**
+     * Closes all connections.
+     *
+     * @throws IOException
+     */
+    void closeConnections() throws IOException;
+
+    /**
+     * Shuts down.
+     */
+    void close();
+}
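
Taken together, these methods imply a per-superstep lifecycle. A minimal
sketch of that shape (the real driver is GraphMapper elsewhere in this
patch; the raw types and fixed loop bound are illustrative only):

    import java.io.IOException;

    import org.apache.hadoop.mapreduce.Mapper;

    public class LifecycleSketch {
        @SuppressWarnings("rawtypes")
        static void runSupersteps(ServerInterface comm,
                                  Mapper.Context context,
                                  int supersteps) throws IOException {
            comm.setup();                        // start the RPC server
            for (int i = 0; i < supersteps; ++i) {
                comm.prepareSuperstep();         // hand in-transit msgs to vertices
                // ... compute() runs on every vertex here, queueing messages ...
                long sent = comm.flush(context); // block until all are delivered
                System.out.println("superstep " + i + ": " + sent + " msgs sent");
            }
            comm.closeConnections();
            comm.close();
        }
    }
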
diff --git a/src/main/java/org/apache/giraph/comm/VertexIdMessages.java b/src/main/java/org/apache/giraph/comm/VertexIdMessages.java
new file mode 100644
index 0000000..09380f6
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/VertexIdMessages.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * This object is only used for transporting a vertex id and its
+ * respective messages to a destination RPC server.
+ *
+ * @param <I> Vertex id
+ * @param <M> Message data
+ */
+@SuppressWarnings("rawtypes")
+public class VertexIdMessages<I extends WritableComparable, M extends Writable>
+        implements Writable, Configurable {
+    /** Vertex id */
+    private I vertexId;
+    /** Message list corresponding to vertex id */
+    private MsgList<M> msgList;
+    /** Configuration from Configurable */
+    private Configuration conf;
+
+    /**
+     * Reflective constructor.
+     */
+    public VertexIdMessages() {}
+
+    /**
+     * Constructor used when creating initial values.
+     *
+     * @param vertexId Vertex id to be sent
+     * @param msgList Message list for the vertex id to be sent
+     */
+    public VertexIdMessages(I vertexId, MsgList<M> msgList) {
+        this.vertexId = vertexId;
+        this.msgList = msgList;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        vertexId = BspUtils.<I>createVertexIndex(getConf());
+        vertexId.readFields(input);
+        msgList = new MsgList<M>();
+        msgList.setConf(getConf());
+        msgList.readFields(input);
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        vertexId.write(output);
+        msgList.write(output);
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    public I getVertexId() {
+        return vertexId;
+    }
+
+    public MsgList<M> getMessageList() {
+        return msgList;
+    }
+}
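
Only readFields() needs the configuration (it must reflectively recreate
the vertex id and message list), which is why the class is Configurable;
write() works as-is. A small serialization sketch with assumed Writable
types:

    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;

    public class VertexIdMessagesSketch {
        public static void main(String[] args) throws Exception {
            MsgList<DoubleWritable> msgs = new MsgList<DoubleWritable>();
            msgs.add(new DoubleWritable(0.15));
            msgs.add(new DoubleWritable(0.85));
            VertexIdMessages<LongWritable, DoubleWritable> pair =
                new VertexIdMessages<LongWritable, DoubleWritable>(
                    new LongWritable(7L), msgs);
            DataOutputBuffer buffer = new DataOutputBuffer();
            pair.write(buffer);  // vertex id first, then its message list
            System.out.println("serialized " + buffer.getLength() +
                               " bytes for vertex " + pair.getVertexId());
        }
    }
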
diff --git a/src/main/java/org/apache/giraph/comm/VertexIdMessagesList.java b/src/main/java/org/apache/giraph/comm/VertexIdMessagesList.java
new file mode 100644
index 0000000..d75578c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/VertexIdMessagesList.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Wrapper around {@link ArrayListWritable} that provides the list for
+ * {@link VertexIdMessages}.
+ *
+ * @param <I> Vertex id
+ * @param <M> Message data
+ */
+@SuppressWarnings("rawtypes")
+public class VertexIdMessagesList<I extends WritableComparable,
+        M extends Writable> extends ArrayListWritable<VertexIdMessages<I, M>> {
+    /** Defining a layout version for a serializable class. */
+    private static final long serialVersionUID = 100L;
+
+    public VertexIdMessagesList() {
+        super();
+    }
+
+    public VertexIdMessagesList(VertexIdMessagesList<I, M> vertexIdMessagesList) {
+        super(vertexIdMessagesList);
+    }
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public void setClass() {
+        setClass((Class<VertexIdMessages<I, M>>)
+                 (new VertexIdMessages<I, M>()).getClass());
+    }
+}
diff --git a/src/main/java/org/apache/giraph/comm/VertexList.java b/src/main/java/org/apache/giraph/comm/VertexList.java
new file mode 100644
index 0000000..2e7e249
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/VertexList.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Wrapper around {@link ArrayListWritable} that allows the vertex
+ * class to be set prior to calling readFields().
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class VertexList<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends ArrayListWritable<BasicVertex<I, V, E, M>> {
+    /** Defining a layout version for a serializable class. */
+    private static final long serialVersionUID = 1000L;
+
+    /**
+     * Default constructor for reflection
+     */
+    public VertexList() {}
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public void setClass() {
+        setClass((Class<BasicVertex<I, V, E, M>>)
+                 BspUtils.<I, V, E, M>getVertexClass(getConf()));
+    }
+}
diff --git a/src/main/java/org/apache/giraph/comm/WorkerCommunications.java b/src/main/java/org/apache/giraph/comm/WorkerCommunications.java
new file mode 100644
index 0000000..0abbc2f
--- /dev/null
+++ b/src/main/java/org/apache/giraph/comm/WorkerCommunications.java
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.giraph.graph.partition.Partition;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Public interface for workers to do message communication
+ *
+ * @param <I> Vertex id
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message data
+ */
+@SuppressWarnings("rawtypes")
+public interface WorkerCommunications<I extends WritableComparable,
+                                      V extends Writable,
+                                      E extends Writable,
+                                      M extends Writable> {
+    /**
+     * Fix changes to the workers and the mapping between partitions and
+     * workers.
+     */
+    void fixPartitionIdToSocketAddrMap();
+
+    /**
+     * Sends a message to destination vertex.
+     *
+     * @param id Destination vertex id
+     * @param msg Message to send
+     */
+    void sendMessageReq(I id, M msg);
+
+    /**
+     * Sends a partition to the appropriate partition owner
+     *
+     * @param workerInfo Worker the partition's vertices belong to
+     * @param partition Partition to send
+     */
+    void sendPartitionReq(WorkerInfo workerInfo,
+                          Partition<I, V, E, M> partition);
+
+    /**
+     * Sends a request to the appropriate vertex range owner to add an edge
+     *
+     * @param vertexIndex Index of the vertex to get the request
+     * @param edge Edge to be added
+     * @throws IOException
+     */
+    void addEdgeReq(I vertexIndex, Edge<I, E> edge) throws IOException;
+
+    /**
+     * Sends a request to the appropriate vertex range owner to remove an edge
+     *
+     * @param vertexIndex Index of the vertex to get the request
+     * @param destinationVertexIndex Destination index of the edge to remove
+     * @throws IOException
+     */
+    void removeEdgeReq(I vertexIndex, I destinationVertexIndex)
+        throws IOException;
+
+    /**
+     * Sends a request to the appropriate vertex range owner to add a vertex
+     *
+     * @param vertex Vertex to be added
+     * @throws IOException
+     */
+    void addVertexReq(BasicVertex<I, V, E, M> vertex) throws IOException;
+
+    /**
+     * Sends a request to the appropriate vertex range owner to remove a vertex
+     *
+     * @param vertexIndex Index of the vertex to be removed
+     * @throws IOException
+     */
+    void removeVertexReq(I vertexIndex) throws IOException;
+
+    /**
+     * Get the vertices that were sent in the last iteration.  After getting
+     * the map, the user should synchronize on it to ensure thread-safe
+     * access.
+     *
+     * @return Map of partition ids to lists of vertices
+     */
+    Map<Integer, List<BasicVertex<I, V, E, M>>> getInPartitionVertexMap();
+}
diff --git a/src/main/java/org/apache/giraph/examples/ConnectedComponentsVertex.java b/src/main/java/org/apache/giraph/examples/ConnectedComponentsVertex.java
new file mode 100644
index 0000000..5f26c65
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/ConnectedComponentsVertex.java
@@ -0,0 +1,96 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.graph.IntIntNullIntVertex;
+import org.apache.hadoop.io.IntWritable;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ * Implementation of the HCC algorithm that identifies connected components
+ * and assigns each vertex its "component identifier" (the smallest vertex
+ * id in the component).
+ *
+ * The idea behind the algorithm is very simple: propagate the smallest
+ * vertex id along the edges to all vertices of a connected component. The
+ * number of supersteps necessary is equal to the maximum diameter among
+ * all components + 1.
+ *
+ * The original Hadoop-based variant of this algorithm was proposed by
+ * Kang, Tsourakakis and Faloutsos in "PEGASUS: Mining Peta-Scale Graphs",
+ * 2010:
+ *
+ * http://www.cs.cmu.edu/~ukang/papers/PegasusKAIS.pdf
+ */
+public class ConnectedComponentsVertex extends IntIntNullIntVertex {
+
+    /**
+     * Propagates the smallest vertex id to all neighbors. Will always
+     * choose to halt and only reactivate if a smaller id has been sent
+     * to it.
+     *
+     * @param messages Incoming messages carrying candidate component ids
+     * @throws IOException
+     */
+    @Override
+    public void compute(Iterator<IntWritable> messages) throws IOException {
+
+        int currentComponent = getVertexValue().get();
+
+        // first superstep is special, because we can simply look at the neighbors
+        if (getSuperstep() == 0) {
+            for (Iterator<IntWritable> edges = iterator(); edges.hasNext();) {
+                int neighbor = edges.next().get();
+                if (neighbor < currentComponent) {
+                    currentComponent = neighbor;
+                }
+            }
+            // only need to send the value if it is not our own id
+            if (currentComponent != getVertexValue().get()) {
+                setVertexValue(new IntWritable(currentComponent));
+                for (Iterator<IntWritable> edges = iterator();
+                        edges.hasNext();) {
+                    int neighbor = edges.next().get();
+                    if (neighbor > currentComponent) {
+                        sendMsg(new IntWritable(neighbor), getVertexValue());
+                    }
+                }
+            }
+
+            voteToHalt();
+            return;
+        }
+
+        boolean changed = false;
+        // did we get a smaller id?
+        while (messages.hasNext()) {
+            int candidateComponent = messages.next().get();
+            if (candidateComponent < currentComponent) {
+                currentComponent = candidateComponent;
+                changed = true;
+            }
+        }
+
+        // propagate new component id to the neighbors
+        if (changed) {
+            setVertexValue(new IntWritable(currentComponent));
+            sendMsgToAllEdges(getVertexValue());
+        }
+        voteToHalt();
+    }
+
+}
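
The min-id propagation above is easy to sanity-check outside Giraph. A
self-contained, plain-Java sketch of the same fixed point on a toy graph,
where each pass over the vertices plays the role of a superstep:

    import java.util.Arrays;

    public class MinLabelPropagation {
        public static void main(String[] args) {
            // Toy undirected graph: {0, 1, 2} form one component, {3, 4} another.
            int[][] adj = { {1}, {0, 2}, {1}, {4}, {3} };
            int[] label = { 0, 1, 2, 3, 4 };  // every vertex starts with its own id
            boolean changed = true;
            int passes = 0;
            while (changed) {                 // iterate until a fixed point
                changed = false;
                for (int v = 0; v < adj.length; ++v) {
                    for (int u : adj[v]) {
                        if (label[u] < label[v]) {
                            label[v] = label[u];
                            changed = true;
                        }
                    }
                }
                ++passes;
            }
            // Prints [0, 0, 0, 3, 3] after 2 passes: each component ends up
            // labeled with its smallest vertex id.
            System.out.println(Arrays.toString(label) + " after " +
                               passes + " passes");
        }
    }
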
diff --git a/src/main/java/org/apache/giraph/examples/GeneratedVertexInputFormat.java b/src/main/java/org/apache/giraph/examples/GeneratedVertexInputFormat.java
new file mode 100644
index 0000000..983b5c3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/GeneratedVertexInputFormat.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.bsp.BspInputSplit;
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * This VertexInputFormat is meant for testing/debugging.  It simply generates
+ * some vertex data that can be consumed by test applications.
+ */
+@SuppressWarnings("rawtypes")
+public abstract class GeneratedVertexInputFormat<
+        I extends WritableComparable, V extends Writable, E extends Writable,
+        M extends Writable>
+        extends VertexInputFormat<I, V, E, M> {
+
+    @Override
+    public List<InputSplit> getSplits(JobContext context, int numWorkers)
+        throws IOException, InterruptedException {
+        // The splits themselves are meaningless; the VertexReader will
+        // generate all the test data.
+        List<InputSplit> inputSplitList = new ArrayList<InputSplit>();
+        for (int i = 0; i < numWorkers; ++i) {
+            inputSplitList.add(new BspInputSplit(i, numWorkers));
+        }
+        return inputSplitList;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/GeneratedVertexReader.java b/src/main/java/org/apache/giraph/examples/GeneratedVertexReader.java
new file mode 100644
index 0000000..83f98f1
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/GeneratedVertexReader.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.bsp.BspInputSplit;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * Used by GeneratedVertexInputFormat to read some generated data
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class GeneratedVertexReader<
+        I extends WritableComparable, V extends Writable, E extends Writable,
+        M extends Writable>
+        implements VertexReader<I, V, E, M> {
+    /** Records read so far */
+    protected long recordsRead = 0;
+    /** Total records to read (on this split alone) */
+    protected long totalRecords = 0;
+    /** The input split from initialize(). */
+    protected BspInputSplit inputSplit = null;
+    /** Reverse the id order? */
+    protected boolean reverseIdOrder;
+
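+    /** Saved configuration from initialize() */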
+    protected Configuration configuration = null;
+
+    public static final String READER_VERTICES =
+        "GeneratedVertexReader.reader_vertices";
+    public static final long DEFAULT_READER_VERTICES = 10;
+    public static final String REVERSE_ID_ORDER =
+        "GeneratedVertexReader.reverseIdOrder";
+    public static final boolean DEFAULT_REVERSE_ID_ORDER = false;
+
+    public GeneratedVertexReader() {
+    }
+
+    @Override
+    final public void initialize(InputSplit inputSplit,
+                                 TaskAttemptContext context)
+            throws IOException {
+        configuration = context.getConfiguration();
+        totalRecords = configuration.getLong(
+            GeneratedVertexReader.READER_VERTICES,
+            GeneratedVertexReader.DEFAULT_READER_VERTICES);
+        reverseIdOrder = configuration.getBoolean(
+            GeneratedVertexReader.REVERSE_ID_ORDER,
+            GeneratedVertexReader.DEFAULT_REVERSE_ID_ORDER);
+        this.inputSplit = (BspInputSplit) inputSplit;
+    }
+
+    @Override
+    public void close() throws IOException {
+    }
+
+    @Override
+    final public float getProgress() throws IOException {
+        return recordsRead * 100.0f / totalRecords;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/IntIntNullIntTextInputFormat.java b/src/main/java/org/apache/giraph/examples/IntIntNullIntTextInputFormat.java
new file mode 100644
index 0000000..d6edfc0
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/IntIntNullIntTextInputFormat.java
@@ -0,0 +1,97 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.giraph.examples;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.giraph.lib.TextVertexInputFormat;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+/**
+ * Simple text-based {@link org.apache.giraph.graph.VertexInputFormat} for unweighted
+ * graphs with int ids.
+ *
+ * Each line consists of: vertex neighbor1 neighbor2 ...
+ */
+public class IntIntNullIntTextInputFormat extends
+        TextVertexInputFormat<IntWritable, IntWritable, NullWritable,
+        IntWritable> {
+
+    @Override
+    public VertexReader<IntWritable, IntWritable, NullWritable, IntWritable>
+    createVertexReader(InputSplit split, TaskAttemptContext context)
+            throws IOException {
+        return new IntIntNullIntVertexReader(
+                textInputFormat.createRecordReader(split, context));
+    }
+
+    public static class IntIntNullIntVertexReader extends
+            TextVertexInputFormat.TextVertexReader<IntWritable, IntWritable,
+                    NullWritable, IntWritable> {
+
+        private static final Pattern SEPARATOR = Pattern.compile("[\t ]");
+
+        public IntIntNullIntVertexReader(RecordReader<LongWritable, Text>
+                lineReader) {
+            super(lineReader);
+        }
+
+        @Override
+        public BasicVertex<IntWritable, IntWritable, NullWritable, IntWritable>
+                getCurrentVertex() throws IOException, InterruptedException {
+            BasicVertex<IntWritable, IntWritable, NullWritable, IntWritable>
+                    vertex = BspUtils.<IntWritable, IntWritable, NullWritable,
+                    IntWritable>createVertex(getContext().getConfiguration());
+
+            String[] tokens = SEPARATOR.split(getRecordReader()
+                    .getCurrentValue().toString());
+            Map<IntWritable, NullWritable> edges =
+                    Maps.newHashMapWithExpectedSize(tokens.length - 1);
+            for (int n = 1; n < tokens.length; n++) {
+                edges.put(new IntWritable(Integer.parseInt(tokens[n])),
+                        NullWritable.get());
+            }
+
+            IntWritable vertexId = new IntWritable(Integer.parseInt(tokens[0]));
+            vertex.initialize(vertexId, vertexId, edges,
+                    Lists.<IntWritable>newArrayList());
+
+            return vertex;
+        }
+
+        @Override
+        public boolean nextVertex() throws IOException, InterruptedException {
+            return getRecordReader().nextKeyValue();
+        }
+    }
+
+}
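
For reference, a fragment of input in the format this reader expects (one
vertex per line: its id followed by its neighbor ids, separated by tabs or
spaces); these particular edges are made up:

    0 1 2
    1 0 2
    2 0 1
    3 4
    4 3
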
diff --git a/src/main/java/org/apache/giraph/examples/LongSumAggregator.java b/src/main/java/org/apache/giraph/examples/LongSumAggregator.java
new file mode 100644
index 0000000..c0811d2
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/LongSumAggregator.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.hadoop.io.LongWritable;
+
+import org.apache.giraph.graph.Aggregator;
+
+/**
+ * Aggregator for summing up values.
+ */
+public class LongSumAggregator implements Aggregator<LongWritable> {
+    /** Internal sum */
+    private long sum = 0;
+
+    public void aggregate(long value) {
+        sum += value;
+    }
+
+    @Override
+    public void aggregate(LongWritable value) {
+        sum += value.get();
+    }
+
+    @Override
+    public void setAggregatedValue(LongWritable value) {
+        sum = value.get();
+    }
+
+    @Override
+    public LongWritable getAggregatedValue() {
+        return new LongWritable(sum);
+    }
+
+    @Override
+    public LongWritable createAggregatedValue() {
+        return new LongWritable();
+    }
+}
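
Exercised on its own, outside a job (a sketch; in a real computation the
framework calls aggregate() during a superstep and the aggregated value is
read in the next one):

    import org.apache.hadoop.io.LongWritable;

    public class LongSumSketch {
        public static void main(String[] args) {
            LongSumAggregator agg = new LongSumAggregator();
            agg.aggregate(new LongWritable(3));
            agg.aggregate(new LongWritable(4));
            agg.aggregate(7L);                            // the primitive overload
            System.out.println(agg.getAggregatedValue()); // prints 14
        }
    }
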
diff --git a/src/main/java/org/apache/giraph/examples/MaxAggregator.java b/src/main/java/org/apache/giraph/examples/MaxAggregator.java
new file mode 100644
index 0000000..4e7a9f3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/MaxAggregator.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.hadoop.io.DoubleWritable;
+
+import org.apache.giraph.graph.Aggregator;
+
+/**
+ * Aggregator for getting max value.
+ */
+public class MaxAggregator implements Aggregator<DoubleWritable> {
+
+  // Double.MIN_VALUE is the smallest positive double; start from negative
+  // infinity so that negative values aggregate correctly.
+  private double max = Double.NEGATIVE_INFINITY;
+
+  public void aggregate(DoubleWritable value) {
+      double val = value.get();
+      if (val > max) {
+          max = val;
+      }
+  }
+
+  public void setAggregatedValue(DoubleWritable value) {
+      max = value.get();
+  }
+
+  public DoubleWritable getAggregatedValue() {
+      return new DoubleWritable(max);
+  }
+
+  public DoubleWritable createAggregatedValue() {
+      return new DoubleWritable();
+  }
+
+}
diff --git a/src/main/java/org/apache/giraph/examples/MinAggregator.java b/src/main/java/org/apache/giraph/examples/MinAggregator.java
new file mode 100644
index 0000000..1714c94
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/MinAggregator.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.hadoop.io.DoubleWritable;
+
+import org.apache.giraph.graph.Aggregator;
+
+/**
+ * Aggregator for getting min value.
+ */
+public class MinAggregator implements Aggregator<DoubleWritable> {
+
+  private double min = Double.MAX_VALUE;
+
+  @Override
+  public void aggregate(DoubleWritable value) {
+      double val = value.get();
+      if (val < min) {
+          min = val;
+      }
+  }
+
+  @Override
+  public void setAggregatedValue(DoubleWritable value) {
+      min = value.get();
+  }
+
+  @Override
+  public DoubleWritable getAggregatedValue() {
+      return new DoubleWritable(min);
+  }
+
+  @Override
+  public DoubleWritable createAggregatedValue() {
+      return new DoubleWritable();
+  }
+
+}
diff --git a/src/main/java/org/apache/giraph/examples/MinimumIntCombiner.java b/src/main/java/org/apache/giraph/examples/MinimumIntCombiner.java
new file mode 100644
index 0000000..1758388
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/MinimumIntCombiner.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.graph.VertexCombiner;
+import org.apache.hadoop.io.IntWritable;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * {@link VertexCombiner} that finds the minimum {@link IntWritable}.
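+ * <p>
+ * A combiner lets the framework collapse the messages bound for a vertex
+ * into fewer messages before delivery. A minimal wiring sketch; the setter
+ * name below is an assumption, not something this patch confirms:
+ * <pre>
+ *   GiraphJob job = new GiraphJob(conf, "minimum");
+ *   // assumed API: job.setVertexCombinerClass(MinimumIntCombiner.class);
+ * </pre>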
+ */
+public class MinimumIntCombiner
+        extends VertexCombiner<IntWritable, IntWritable> {
+
+    @Override
+    public Iterable<IntWritable> combine(IntWritable target,
+            Iterable<IntWritable> messages) throws IOException {
+        int minimum = Integer.MAX_VALUE;
+        for (IntWritable message : messages) {
+            if (message.get() < minimum) {
+                minimum = message.get();
+            }
+        }
+        List<IntWritable> value = new ArrayList<IntWritable>();
+        value.add(new IntWritable(minimum));
+
+        return value;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleAggregatorWriter.java b/src/main/java/org/apache/giraph/examples/SimpleAggregatorWriter.java
new file mode 100644
index 0000000..afdcb27
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleAggregatorWriter.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.giraph.graph.Aggregator;
+import org.apache.giraph.graph.AggregatorWriter;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+
+/**
+ * This is a simple example of an aggregator writer. After each superstep
+ * the writer persists the aggregator values to disk using the Writable
+ * interface. The file is created in the current working directory.
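+ * <p>
+ * A minimal wiring sketch; the setter name below is an assumption, not
+ * something this patch confirms:
+ * <pre>
+ *   // assumed API: job.setAggregatorWriterClass(SimpleAggregatorWriter.class);
+ * </pre>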
+ */
+public class SimpleAggregatorWriter implements AggregatorWriter {
+    /** Name of the file the aggregated values are written to */
+    public static String filename;
+    private FSDataOutputStream output;
+
+    @SuppressWarnings("rawtypes")
+    @Override
+    public void initialize(Context context, long applicationAttempt)
+            throws IOException {
+        filename = "aggregatedValues_"+applicationAttempt;
+        Path p = new Path(filename);
+        FileSystem fs = FileSystem.get(context.getConfiguration());
+        output = fs.create(p, true);
+    }
+
+    @Override
+    public void writeAggregator(Map<String, Aggregator<Writable>> map,
+            long superstep) throws IOException {
+        for (Entry<String, Aggregator<Writable>> entry : map.entrySet()) {
+            entry.getValue().getAggregatedValue().write(output);
+        }
+        output.flush();
+    }
+
+    @Override
+    public void close() throws IOException {
+        output.close();
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleCheckpointVertex.java b/src/main/java/org/apache/giraph/examples/SimpleCheckpointVertex.java
new file mode 100644
index 0000000..59d8bdc
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleCheckpointVertex.java
@@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.WorkerContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Logger;
+
+import java.util.Iterator;
+
+/**
+ * An example that simply uses its id, value, and edges to compute new data
+ * every iteration to verify that checkpoint restarting works.  Fault injection
+ * can also test automated checkpoint restarts.
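+ *
+ * Concretely: with fault injection enabled, vertex 1 calls System.exit(-1)
+ * on the first attempt of superstep 4, so a successful run demonstrates a
+ * restart from the last checkpoint.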
+ */
+public class SimpleCheckpointVertex extends
+        EdgeListVertex<LongWritable, IntWritable, FloatWritable, FloatWritable>
+        implements Tool {
+    private static final Logger LOG =
+        Logger.getLogger(SimpleCheckpointVertex.class);
+    /** Configuration */
+    private Configuration conf;
+    /** Which superstep to cause the worker to fail */
+    public final int faultingSuperstep = 4;
+    /** Vertex id to fault on */
+    public final long faultingVertexId = 1;
+    /** Dynamically set number of supersteps */
+    public static final String SUPERSTEP_COUNT =
+        "simpleCheckpointVertex.superstepCount";
+    /** Should fault? */
+    public static final String ENABLE_FAULT =
+        "simpleCheckpointVertex.enableFault";
+
+    @Override
+    public void compute(Iterator<FloatWritable> msgIterator) {
+        SimpleCheckpointVertexWorkerContext workerContext =
+            (SimpleCheckpointVertexWorkerContext) getWorkerContext();
+
+        LongSumAggregator sumAggregator = (LongSumAggregator)
+            getAggregator(LongSumAggregator.class.getName());
+
+        boolean enableFault = workerContext.getEnableFault();
+        int supersteps = workerContext.getSupersteps();
+
+        if (enableFault && (getSuperstep() == faultingSuperstep) &&
+                (getContext().getTaskAttemptID().getId() == 0) &&
+                (getVertexId().get() == faultingVertexId)) {
+            System.out.println("compute: Forced a fault on the first " +
+                               "attempt of superstep " +
+                               faultingSuperstep + " and vertex id " +
+                               faultingVertexId);
+            System.exit(-1);
+        }
+        if (getSuperstep() > supersteps) {
+            voteToHalt();
+            return;
+        }
+        System.out.println("compute: " + sumAggregator);
+        sumAggregator.aggregate(getVertexId().get());
+        System.out.println("compute: sum = " +
+                           sumAggregator.getAggregatedValue().get() +
+                           " for vertex " + getVertexId());
+        float msgValue = 0.0f;
+        while (msgIterator.hasNext()) {
+            float curMsgValue = msgIterator.next().get();
+            msgValue += curMsgValue;
+            System.out.println("compute: got msgValue = " + curMsgValue +
+                               " for vertex " + getVertexId() +
+                               " on superstep " + getSuperstep());
+        }
+        int vertexValue = getVertexValue().get();
+        setVertexValue(new IntWritable(vertexValue + (int) msgValue));
+        System.out.println("compute: vertex " + getVertexId() +
+                           " has value " + getVertexValue() +
+                           " on superstep " + getSuperstep());
+        for (LongWritable targetVertexId : this) {
+            FloatWritable edgeValue = getEdgeValue(targetVertexId);
+            System.out.println("compute: vertex " + getVertexId() +
+                               " sending edgeValue " + edgeValue +
+                               " vertexValue " + vertexValue +
+                               " total " + (edgeValue.get() +
+                               (float) vertexValue) +
+                               " to vertex " + targetVertexId +
+                               " on superstep " + getSuperstep());
+            edgeValue.set(edgeValue.get() + (float) vertexValue);
+            addEdge(targetVertexId, edgeValue);
+            sendMsg(targetVertexId, new FloatWritable(edgeValue.get()));
+        }
+    }
+
+    public static class SimpleCheckpointVertexWorkerContext
+            extends WorkerContext {
+        /** User can access this after the application finishes if local */
+        public static long finalSum;
+        /** Number of supersteps to run (6 by default) */
+        private int supersteps = 6;
+        /** Filename to indicate whether a fault was found */
+        public final String faultFile = "/tmp/faultFile";
+        /** Enable the fault at the particular vertex id and superstep? */
+        private boolean enableFault = false;
+
+        @Override
+        public void preApplication()
+                throws InstantiationException, IllegalAccessException {
+            registerAggregator(LongSumAggregator.class.getName(),
+                LongSumAggregator.class);
+            LongSumAggregator sumAggregator = (LongSumAggregator)
+                getAggregator(LongSumAggregator.class.getName());
+            sumAggregator.setAggregatedValue(new LongWritable(0));
+            supersteps = getContext().getConfiguration()
+                .getInt(SUPERSTEP_COUNT, supersteps);
+            enableFault = getContext().getConfiguration()
+                .getBoolean(ENABLE_FAULT, false);
+        }
+
+        @Override
+        public void postApplication() {
+            LongSumAggregator sumAggregator = (LongSumAggregator)
+                getAggregator(LongSumAggregator.class.getName());
+            finalSum = sumAggregator.getAggregatedValue().get();
+            LOG.info("finalSum=" + finalSum);
+        }
+
+        @Override
+        public void preSuperstep() {
+            useAggregator(LongSumAggregator.class.getName());
+        }
+
+        @Override
+        public void postSuperstep() { }
+
+        public int getSupersteps() {
+            return this.supersteps;
+        }
+
+        public boolean getEnableFault() {
+            return this.enableFault;
+        }
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+        Options options = new Options();
+        options.addOption("h", "help", false, "Help");
+        options.addOption("v", "verbose", false, "Verbose");
+        options.addOption("w",
+                          "workers",
+                          true,
+                          "Number of workers");
+        options.addOption("s",
+                          "supersteps",
+                          true,
+                          "Supersteps to execute before finishing");
+        options.addOption("w",
+                          "workers",
+                          true,
+                          "Minimum number of workers");
+        options.addOption("o",
+                          "outputDirectory",
+                          true,
+                          "Output directory");
+        HelpFormatter formatter = new HelpFormatter();
+        if (args.length == 0) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmd = parser.parse(options, args);
+        if (cmd.hasOption('h')) {
+            formatter.printHelp(getClass().getName(), options, true);
+            return 0;
+        }
+        if (!cmd.hasOption('w')) {
+            System.out.println("Need to choose the number of workers (-w)");
+            return -1;
+        }
+        if (!cmd.hasOption('o')) {
+            System.out.println("Need to set the output directory (-o)");
+            return -1;
+        }
+
+        GiraphJob bspJob = new GiraphJob(getConf(), getClass().getName());
+        bspJob.setVertexClass(getClass());
+        bspJob.setVertexInputFormatClass(GeneratedVertexInputFormat.class);
+        bspJob.setVertexOutputFormatClass(SimpleTextVertexOutputFormat.class);
+        bspJob.setWorkerContextClass(SimpleCheckpointVertexWorkerContext.class);
+        int minWorkers = Integer.parseInt(cmd.getOptionValue('w'));
+        int maxWorkers = Integer.parseInt(cmd.getOptionValue('w'));
+        bspJob.setWorkerConfiguration(minWorkers, maxWorkers, 100.0f);
+
+        FileOutputFormat.setOutputPath(bspJob,
+                                       new Path(cmd.getOptionValue('o')));
+        boolean verbose = false;
+        if (cmd.hasOption('v')) {
+            verbose = true;
+        }
+        if (cmd.hasOption('s')) {
+            getConf().setInt(SUPERSTEP_COUNT,
+                             Integer.parseInt(cmd.getOptionValue('s')));
+        }
+        return bspJob.run(verbose) ? 0 : -1;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.exit(ToolRunner.run(new SimpleCheckpointVertex(), args));
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleCombinerVertex.java b/src/main/java/org/apache/giraph/examples/SimpleCombinerVertex.java
new file mode 100644
index 0000000..1f96c5d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleCombinerVertex.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.util.Iterator;
+
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+
+import org.apache.giraph.graph.EdgeListVertex;
+
+/**
+ * Test whether messages can go through a combiner.
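+ * Vertex 2 sends 101, 102 and 103 to vertex 1; with a sum combiner
+ * configured, vertex 1 should receive them as the single message 306.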
+ */
+public class SimpleCombinerVertex extends
+        EdgeListVertex<LongWritable, IntWritable, FloatWritable, IntWritable> {
+    @Override
+    public void compute(Iterator<IntWritable> msgIterator) {
+        if (getVertexId().equals(new LongWritable(2))) {
+            sendMsg(new LongWritable(1), new IntWritable(101));
+            sendMsg(new LongWritable(1), new IntWritable(102));
+            sendMsg(new LongWritable(1), new IntWritable(103));
+        }
+        if (!getVertexId().equals(new LongWritable(1))) {
+            voteToHalt();
+        } else {
+            // Check the messages
+            int sum = 0;
+            int num = 0;
+            while (msgIterator != null && msgIterator.hasNext()) {
+                sum += msgIterator.next().get();
+                num++;
+            }
+            System.out.println("TestCombinerVertex: Received a sum of " + sum +
+            " (should have 306 with a single message value)");
+
+            if (num == 1 && sum == 306) {
+                voteToHalt();
+            }
+        }
+        if (getSuperstep() > 3) {
+            throw new IllegalStateException(
+                "TestCombinerVertex: Vertex 1 failed to receive " +
+                "messages in time");
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleFailVertex.java b/src/main/java/org/apache/giraph/examples/SimpleFailVertex.java
new file mode 100644
index 0000000..71117c0
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleFailVertex.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+
+import java.util.Iterator;
+
+/**
+ * Vertex to allow unit testing of failure detection
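+ * (vertex 10 kills its worker with System.exit(1) on superstep 20).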
+ */
+public class SimpleFailVertex extends
+        EdgeListVertex<LongWritable, DoubleWritable,
+        FloatWritable, DoubleWritable> {
+
+    static long superstep = 0;
+
+    @Override
+    public void compute(Iterator<DoubleWritable> msgIterator) {
+        if (getSuperstep() >= 1) {
+            double sum = 0;
+            while (msgIterator.hasNext()) {
+                sum += msgIterator.next().get();
+            }
+            DoubleWritable vertexValue =
+                new DoubleWritable((0.15f / getNumVertices()) + 0.85f * sum);
+            setVertexValue(vertexValue);
+            if (getSuperstep() < 30) {
+                if (getSuperstep() == 20) {
+                    if (getVertexId().get() == 10L) {
+                        try {
+                            Thread.sleep(2000);
+                        } catch (InterruptedException e) {
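+                            // Ignored: this worker exits immediately below.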
+                        }
+                        System.exit(1);
+                    } else if (getSuperstep() - superstep > 10) {
+                        return;
+                    }
+                }
+                long edges = getNumOutEdges();
+                sendMsgToAllEdges(
+                    new DoubleWritable(getVertexValue().get() / edges));
+            } else {
+                voteToHalt();
+            }
+            superstep = getSuperstep();
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleMsgVertex.java b/src/main/java/org/apache/giraph/examples/SimpleMsgVertex.java
new file mode 100644
index 0000000..83a35bc
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleMsgVertex.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.util.Iterator;
+
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+
+import org.apache.giraph.graph.EdgeListVertex;
+
+/**
+ * Test whether messages can be sent and received by vertices.
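+ * Vertex 2 sends 101, 102 and 103 to vertex 1, which votes to halt once
+ * the received messages sum to 306.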
+ */
+public class SimpleMsgVertex extends
+        EdgeListVertex<LongWritable, IntWritable, FloatWritable, IntWritable> {
+    @Override
+    public void compute(Iterator<IntWritable> msgIterator) {
+        if (getVertexId().equals(new LongWritable(2))) {
+            sendMsg(new LongWritable(1), new IntWritable(101));
+            sendMsg(new LongWritable(1), new IntWritable(102));
+            sendMsg(new LongWritable(1), new IntWritable(103));
+        }
+        if (!getVertexId().equals(new LongWritable(1))) {
+            voteToHalt();
+        } else {
+            /* Check the messages */
+            int sum = 0;
+            while (msgIterator != null && msgIterator.hasNext()) {
+                sum += msgIterator.next().get();
+            }
+            System.out.println("TestMsgVertex: Received a sum of " + sum +
+            " (will stop on 306)");
+
+            if (sum == 306) {
+                voteToHalt();
+            }
+        }
+        if (getSuperstep() > 3) {
+            System.err.println("TestMsgVertex: Vertex 1 failed to receive " +
+                               "messages in time");
+            voteToHalt();
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleMutateGraphVertex.java b/src/main/java/org/apache/giraph/examples/SimpleMutateGraphVertex.java
new file mode 100644
index 0000000..ce23af5
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleMutateGraphVertex.java
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.log4j.Logger;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.WorkerContext;
+
+/**
+ * Vertex to allow unit testing of graph mutations.
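+ *
+ * Superstep schedule: superstep 1 sends messages to vertices that do not
+ * exist yet (implicitly creating them), superstep 3 explicitly adds
+ * vertices and edges, superstep 5 removes the edges added in superstep 3,
+ * superstep 6 removes the vertices added in superstep 3, and supersteps
+ * 3, 5, 7 and 8 verify the resulting vertex and edge counts.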
+ */
+public class SimpleMutateGraphVertex extends
+        EdgeListVertex<LongWritable, DoubleWritable,
+        FloatWritable, DoubleWritable> {
+    /** Maximum number of ranges for vertex ids */
+    private long maxRanges = 100;
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(SimpleMutateGraphVertex.class);
+
+    /**
+     * Unless we create a ridiculous number of vertices, we should not
+     * collide within a vertex range defined by this method.
+     *
+     * @param range Range index
+     * @return Starting vertex id of the range
+     */
+    private long rangeVertexIdStart(int range) {
+        return (Long.MAX_VALUE / maxRanges) * range;
+    }
+
+    @Override
+    public void compute(Iterator<DoubleWritable> msgIterator)
+            throws IOException {
+
+        SimpleMutateGraphVertexWorkerContext workerContext =
+            (SimpleMutateGraphVertexWorkerContext) getWorkerContext();
+        if (getSuperstep() == 0) {
+        } else if (getSuperstep() == 1) {
+            // Send messages to vertices that are sure not to exist
+            // (creating them)
+            LongWritable destVertexId =
+                new LongWritable(rangeVertexIdStart(1) + getVertexId().get());
+            sendMsg(destVertexId, new DoubleWritable(0.0));
+        } else if (getSuperstep() == 2) {
+        } else if (getSuperstep() == 3) {
+            long vertexCount = workerContext.getVertexCount();
+            if (vertexCount * 2 != getNumVertices()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumVertices() +
+                    " vertices when should have " + vertexCount * 2 +
+                    " on superstep " + getSuperstep());
+            }
+            long edgeCount = workerContext.getEdgeCount();
+            if (edgeCount != getNumEdges()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumEdges() +
+                    " edges when should have " + edgeCount +
+                    " on superstep " + getSuperstep());
+            }
+            // Create vertices that are sure not to exist (doubling vertices)
+            LongWritable vertexIndex =
+                new LongWritable(rangeVertexIdStart(3) + getVertexId().get());
+            BasicVertex<LongWritable, DoubleWritable,
+                FloatWritable, DoubleWritable> vertex =
+                    instantiateVertex(vertexIndex, null, null, null);
+            addVertexRequest(vertex);
+            // Add edges to those remote vertices as well
+            addEdgeRequest(vertexIndex,
+                           new Edge<LongWritable, FloatWritable>(
+                               getVertexId(), new FloatWritable(0.0f)));
+        } else if (getSuperstep() == 4) {
+        } else if (getSuperstep() == 5) {
+            long vertexCount = workerContext.getVertexCount();
+            if (vertexCount * 2 != getNumVertices()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumVertices() +
+                    " when should have " + vertexCount * 2 +
+                    " on superstep " + getSuperstep());
+            }
+            long edgeCount = workerContext.getEdgeCount();
+            if (edgeCount + vertexCount != getNumEdges()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumEdges() +
+                    " edges when should have " + edgeCount + vertexCount +
+                    " on superstep " + getSuperstep());
+            }
+            // Remove the edges created in superstep 3
+            LongWritable vertexIndex =
+                new LongWritable(rangeVertexIdStart(3) + getVertexId().get());
+            workerContext.increaseEdgesRemoved();
+            removeEdgeRequest(vertexIndex, getVertexId());
+        } else if (getSuperstep() == 6) {
+            // Remove all the vertices created in superstep 3
+            if (getVertexId().compareTo(
+                    new LongWritable(rangeVertexIdStart(3))) >= 0) {
+                removeVertexRequest(getVertexId());
+            }
+        } else if (getSuperstep() == 7) {
+            long origEdgeCount = workerContext.getOrigEdgeCount();
+            if (origEdgeCount != getNumEdges()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumEdges() +
+                    " edges when should have " + origEdgeCount +
+                    " on superstep " + getSuperstep());
+            }
+        } else if (getSuperstep() == 8) {
+            long vertexCount = workerContext.getVertexCount();
+            if (vertexCount / 2 != getNumVertices()) {
+                throw new IllegalStateException(
+                    "Impossible to have " + getNumVertices() +
+                    " vertices when should have " + vertexCount / 2 +
+                    " on superstep " + getSuperstep());
+            }
+        } else {
+            voteToHalt();
+        }
+    }
+
+    public static class SimpleMutateGraphVertexWorkerContext
+            extends WorkerContext {
+        /** Cached vertex count */
+        private long vertexCount;
+        /** Cached edge count */
+        private long edgeCount;
+        /** Original number of edges */
+        private long origEdgeCount;
+        /** Number of edges removed during superstep */
+        private int edgesRemoved = 0;
+
+        @Override
+        public void preApplication()
+                throws InstantiationException, IllegalAccessException { }
+
+        @Override
+        public void postApplication() { }
+
+        @Override
+        public void preSuperstep() { }
+
+        @Override
+        public void postSuperstep() {
+            vertexCount = getNumVertices();
+            edgeCount = getNumEdges();
+            if (getSuperstep() == 1) {
+                origEdgeCount = edgeCount;
+            }
+            LOG.info("Got " + vertexCount + " vertices, " +
+                     edgeCount + " edges on superstep " + getSuperstep());
+            LOG.info("Removed " + edgesRemoved);
+            edgesRemoved = 0;
+        }
+
+        public long getVertexCount() {
+            return vertexCount;
+        }
+
+        public long getEdgeCount() {
+            return edgeCount;
+        }
+
+        public long getOrigEdgeCount() {
+            return origEdgeCount;
+        }
+
+        public void increaseEdgesRemoved() {
+            this.edgesRemoved++;
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimplePageRankVertex.java b/src/main/java/org/apache/giraph/examples/SimplePageRankVertex.java
new file mode 100644
index 0000000..5e37075
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimplePageRankVertex.java
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import com.google.common.collect.Maps;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.LongDoubleFloatDoubleVertex;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.giraph.graph.WorkerContext;
+import org.apache.giraph.lib.TextVertexOutputFormat;
+import org.apache.giraph.lib.TextVertexOutputFormat.TextVertexWriter;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+/**
+ * Demonstrates the basic Pregel PageRank implementation.
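+ *
+ * Each superstep applies the standard update with damping factor 0.85:
+ * value = 0.15 / numVertices + 0.85 * (sum of incoming messages); the
+ * vertex then sends value / numOutEdges along each out-edge, stopping
+ * after MAX_SUPERSTEPS supersteps.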
+ */
+public class SimplePageRankVertex extends LongDoubleFloatDoubleVertex {
+    /** Number of supersteps for this test */
+    public static final int MAX_SUPERSTEPS = 30;
+    /** Logger */
+    private static final Logger LOG =
+        Logger.getLogger(SimplePageRankVertex.class);
+
+    @Override
+    public void compute(Iterator<DoubleWritable> msgIterator) {
+        LongSumAggregator sumAggreg = (LongSumAggregator) getAggregator("sum");
+        MinAggregator minAggreg = (MinAggregator) getAggregator("min");
+        MaxAggregator maxAggreg = (MaxAggregator) getAggregator("max");
+        if (getSuperstep() >= 1) {
+            double sum = 0;
+            while (msgIterator.hasNext()) {
+                sum += msgIterator.next().get();
+            }
+            DoubleWritable vertexValue =
+                new DoubleWritable((0.15f / getNumVertices()) + 0.85f * sum);
+            setVertexValue(vertexValue);
+            maxAggreg.aggregate(vertexValue);
+            minAggreg.aggregate(vertexValue);
+            sumAggreg.aggregate(1L);
+            LOG.info(getVertexId() + ": PageRank=" + vertexValue +
+                     " max=" + maxAggreg.getAggregatedValue() +
+                     " min=" + minAggreg.getAggregatedValue());
+        }
+
+        if (getSuperstep() < MAX_SUPERSTEPS) {
+            long edges = getNumOutEdges();
+            sendMsgToAllEdges(
+                new DoubleWritable(getVertexValue().get() / edges));
+        } else {
+            voteToHalt();
+        }
+    }
+
+    public static class SimplePageRankVertexWorkerContext extends
+            WorkerContext {
+
+        public static double finalMax, finalMin;
+        public static long finalSum;
+
+        @Override
+        public void preApplication()
+                throws InstantiationException, IllegalAccessException {
+            registerAggregator("sum", LongSumAggregator.class);
+            registerAggregator("min", MinAggregator.class);
+            registerAggregator("max", MaxAggregator.class);
+        }
+
+        @Override
+        public void postApplication() {
+            LongSumAggregator sumAggreg =
+                (LongSumAggregator) getAggregator("sum");
+            MinAggregator minAggreg =
+                (MinAggregator) getAggregator("min");
+            MaxAggregator maxAggreg =
+                (MaxAggregator) getAggregator("max");
+
+            finalSum = sumAggreg.getAggregatedValue().get();
+            finalMax = maxAggreg.getAggregatedValue().get();
+            finalMin = minAggreg.getAggregatedValue().get();
+
+            LOG.info("aggregatedNumVertices=" + finalSum);
+            LOG.info("aggregatedMaxPageRank=" + finalMax);
+            LOG.info("aggregatedMinPageRank=" + finalMin);
+        }
+
+        @Override
+        public void preSuperstep() {
+            LongSumAggregator sumAggreg =
+                (LongSumAggregator) getAggregator("sum");
+            MinAggregator minAggreg =
+                (MinAggregator) getAggregator("min");
+            MaxAggregator maxAggreg =
+                (MaxAggregator) getAggregator("max");
+
+            if (getSuperstep() >= 3) {
+                LOG.info("aggregatedNumVertices=" +
+                         sumAggreg.getAggregatedValue() +
+                         " NumVertices=" + getNumVertices());
+                if (sumAggreg.getAggregatedValue().get() != getNumVertices()) {
+                    throw new RuntimeException("wrong value of SumAggreg: " +
+                        sumAggreg.getAggregatedValue() + ", should be: " +
+                        getNumVertices());
+                }
+                DoubleWritable maxPagerank = maxAggreg.getAggregatedValue();
+                LOG.info("aggregatedMaxPageRank=" + maxPagerank.get());
+                DoubleWritable minPagerank = minAggreg.getAggregatedValue();
+                LOG.info("aggregatedMinPageRank=" + minPagerank.get());
+            }
+            useAggregator("sum");
+            useAggregator("min");
+            useAggregator("max");
+            sumAggreg.setAggregatedValue(new LongWritable(0L));
+        }
+
+        @Override
+        public void postSuperstep() { }
+    }
+
+    /**
+     * Simple VertexReader that supports {@link SimplePageRankVertex}
+     */
+    public static class SimplePageRankVertexReader extends
+            GeneratedVertexReader<LongWritable, DoubleWritable, FloatWritable,
+                DoubleWritable> {
+        /** Class logger */
+        private static final Logger LOG =
+            Logger.getLogger(SimplePageRankVertexReader.class);
+
+        public SimplePageRankVertexReader() {
+            super();
+        }
+
+        @Override
+        public boolean nextVertex() {
+            return totalRecords > recordsRead;
+        }
+
+        @Override
+        public BasicVertex<LongWritable, DoubleWritable, FloatWritable, DoubleWritable>
+          getCurrentVertex() throws IOException {
+            BasicVertex<LongWritable, DoubleWritable, FloatWritable, DoubleWritable>
+                vertex = BspUtils.createVertex(configuration);
+
+            LongWritable vertexId = new LongWritable(
+                (inputSplit.getSplitIndex() * totalRecords) + recordsRead);
+            DoubleWritable vertexValue = new DoubleWritable(vertexId.get() * 10d);
+            long destVertexId =
+                (vertexId.get() + 1) %
+                (inputSplit.getNumSplits() * totalRecords);
+            float edgeValue = vertexId.get() * 100f;
+            Map<LongWritable, FloatWritable> edges = Maps.newHashMap();
+            edges.put(new LongWritable(destVertexId), new FloatWritable(edgeValue));
+            vertex.initialize(vertexId, vertexValue, edges, null);
+            ++recordsRead;
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getCurrentVertex: Return vertexId=" +
+                         vertex.getVertexId().get() +
+                         ", vertexValue=" + vertex.getVertexValue() +
+                         ", destinationId=" + destVertexId +
+                         ", edgeValue=" + edgeValue);
+            }
+            return vertex;
+        }
+    }
+
+    /**
+     * Simple VertexInputFormat that supports {@link SimplePageRankVertex}
+     */
+    public static class SimplePageRankVertexInputFormat extends
+            GeneratedVertexInputFormat<LongWritable,
+            DoubleWritable, FloatWritable, DoubleWritable> {
+        @Override
+        public VertexReader<LongWritable, DoubleWritable, FloatWritable, DoubleWritable>
+                createVertexReader(InputSplit split,
+                                   TaskAttemptContext context)
+                                   throws IOException {
+            return new SimplePageRankVertexReader();
+        }
+    }
+
+    /**
+     * Simple VertexWriter that supports {@link SimplePageRankVertex}
+     */
+    public static class SimplePageRankVertexWriter extends
+            TextVertexWriter<LongWritable, DoubleWritable, FloatWritable> {
+        public SimplePageRankVertexWriter(
+                RecordWriter<Text, Text> lineRecordWriter) {
+            super(lineRecordWriter);
+        }
+
+        @Override
+        public void writeVertex(
+                BasicVertex<LongWritable, DoubleWritable, FloatWritable, ?> vertex)
+                throws IOException, InterruptedException {
+            getRecordWriter().write(
+                new Text(vertex.getVertexId().toString()),
+                new Text(vertex.getVertexValue().toString()));
+        }
+    }
+
+    /**
+     * Simple VertexOutputFormat that supports {@link SimplePageRankVertex}
+     */
+    public static class SimplePageRankVertexOutputFormat extends
+            TextVertexOutputFormat<LongWritable, DoubleWritable, FloatWritable> {
+
+        @Override
+        public VertexWriter<LongWritable, DoubleWritable, FloatWritable>
+            createVertexWriter(TaskAttemptContext context)
+                throws IOException, InterruptedException {
+            RecordWriter<Text, Text> recordWriter =
+                textOutputFormat.getRecordWriter(context);
+            return new SimplePageRankVertexWriter(recordWriter);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleShortestPathsVertex.java b/src/main/java/org/apache/giraph/examples/SimpleShortestPathsVertex.java
new file mode 100644
index 0000000..71253cf
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleShortestPathsVertex.java
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Maps;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.giraph.lib.TextVertexInputFormat;
+import org.apache.giraph.lib.TextVertexInputFormat.TextVertexReader;
+import org.apache.giraph.lib.TextVertexOutputFormat;
+import org.apache.giraph.lib.TextVertexOutputFormat.TextVertexWriter;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Logger;
+import org.json.JSONArray;
+import org.json.JSONException;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+/**
+ * Demonstrates the basic Pregel shortest paths implementation.
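+ *
+ * Each vertex tracks its best-known distance from the source. Whenever an
+ * incoming message improves that distance, the vertex relaxes its
+ * out-edges by sending (distance + edge weight) to each neighbor, then
+ * votes to halt; the computation ends once no distance improves.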
+ */
+public class SimpleShortestPathsVertex extends
+        EdgeListVertex<LongWritable, DoubleWritable,
+        FloatWritable, DoubleWritable> implements Tool {
+    /** Configuration */
+    private Configuration conf;
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(SimpleShortestPathsVertex.class);
+    /** The shortest paths id */
+    public static final String SOURCE_ID = "SimpleShortestPathsVertex.sourceId";
+    /** Default shortest paths id */
+    public static final long SOURCE_ID_DEFAULT = 1;
+
+    /**
+     * Is this vertex the source id?
+     *
+     * @return True if the source id
+     */
+    private boolean isSource() {
+        return (getVertexId().get() ==
+            getContext().getConfiguration().getLong(SOURCE_ID,
+                                                    SOURCE_ID_DEFAULT));
+    }
+
+    @Override
+    public void compute(Iterator<DoubleWritable> msgIterator) {
+        if (getSuperstep() == 0) {
+            setVertexValue(new DoubleWritable(Double.MAX_VALUE));
+        }
+        double minDist = isSource() ? 0d : Double.MAX_VALUE;
+        while (msgIterator.hasNext()) {
+            minDist = Math.min(minDist, msgIterator.next().get());
+        }
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("Vertex " + getVertexId() + " got minDist = " + minDist +
+                     " vertex value = " + getVertexValue());
+        }
+        if (minDist < getVertexValue().get()) {
+            setVertexValue(new DoubleWritable(minDist));
+            for (LongWritable targetVertexId : this) {
+                FloatWritable edgeValue = getEdgeValue(targetVertexId);
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("Vertex " + getVertexId() + " sent to " +
+                              targetVertexId + " = " +
+                              (minDist + edgeValue.get()));
+                }
+                sendMsg(targetVertexId,
+                        new DoubleWritable(minDist + edgeValue.get()));
+            }
+        }
+        voteToHalt();
+    }
+
+    /**
+     * VertexInputFormat that supports {@link SimpleShortestPathsVertex}
+     */
+    public static class SimpleShortestPathsVertexInputFormat extends
+            TextVertexInputFormat<LongWritable,
+                                  DoubleWritable,
+                                  FloatWritable,
+                                  DoubleWritable> {
+        @Override
+        public VertexReader<LongWritable, DoubleWritable, FloatWritable, DoubleWritable>
+                createVertexReader(InputSplit split,
+                                   TaskAttemptContext context)
+                                   throws IOException {
+            return new SimpleShortestPathsVertexReader(
+                textInputFormat.createRecordReader(split, context));
+        }
+    }
+
+    /**
+     * VertexReader that supports {@link SimpleShortestPathsVertex}.  In this
+     * case, the edge values are not used.  The files should be in the
+     * following JSON format:
+     * JSONArray(<vertex id>, <vertex value>,
+     *           JSONArray(JSONArray(<dest vertex id>, <edge value>), ...))
+     * Here is an example with vertex id 1, vertex value 4.3, and two edges.
+     * First edge has a destination vertex 2, edge value 2.1.
+     * Second edge has a destination vertex 3, edge value 0.7.
+     * [1,4.3,[[2,2.1],[3,0.7]]]
+     */
+    public static class SimpleShortestPathsVertexReader extends
+            TextVertexReader<LongWritable,
+                DoubleWritable, FloatWritable, DoubleWritable> {
+
+        public SimpleShortestPathsVertexReader(
+                RecordReader<LongWritable, Text> lineRecordReader) {
+            super(lineRecordReader);
+        }
+
+        @Override
+        public BasicVertex<LongWritable, DoubleWritable, FloatWritable,
+                           DoubleWritable> getCurrentVertex()
+            throws IOException, InterruptedException {
+          BasicVertex<LongWritable, DoubleWritable, FloatWritable,
+              DoubleWritable> vertex = BspUtils.<LongWritable, DoubleWritable, FloatWritable,
+                  DoubleWritable>createVertex(getContext().getConfiguration());
+
+            Text line = getRecordReader().getCurrentValue();
+            try {
+                JSONArray jsonVertex = new JSONArray(line.toString());
+                LongWritable vertexId = new LongWritable(jsonVertex.getLong(0));
+                DoubleWritable vertexValue = new DoubleWritable(jsonVertex.getDouble(1));
+                Map<LongWritable, FloatWritable> edges = Maps.newHashMap();
+                JSONArray jsonEdgeArray = jsonVertex.getJSONArray(2);
+                for (int i = 0; i < jsonEdgeArray.length(); ++i) {
+                    JSONArray jsonEdge = jsonEdgeArray.getJSONArray(i);
+                    edges.put(new LongWritable(jsonEdge.getLong(0)),
+                            new FloatWritable((float) jsonEdge.getDouble(1)));
+                }
+                vertex.initialize(vertexId, vertexValue, edges, null);
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "next: Couldn't get vertex from line " + line, e);
+            }
+            return vertex;
+        }
+
+        @Override
+        public boolean nextVertex() throws IOException, InterruptedException {
+            return getRecordReader().nextKeyValue();
+        }
+    }
+
+    /**
+     * VertexOutputFormat that supports {@link SimpleShortestPathsVertex}
+     */
+    public static class SimpleShortestPathsVertexOutputFormat extends
+            TextVertexOutputFormat<LongWritable, DoubleWritable,
+            FloatWritable> {
+
+        @Override
+        public VertexWriter<LongWritable, DoubleWritable, FloatWritable>
+                createVertexWriter(TaskAttemptContext context)
+                throws IOException, InterruptedException {
+            RecordWriter<Text, Text> recordWriter =
+                textOutputFormat.getRecordWriter(context);
+            return new SimpleShortestPathsVertexWriter(recordWriter);
+        }
+    }
+
+    /**
+     * VertexWriter that supports {@link SimpleShortestPathsVertex}
+     */
+    public static class SimpleShortestPathsVertexWriter extends
+            TextVertexWriter<LongWritable, DoubleWritable, FloatWritable> {
+        public SimpleShortestPathsVertexWriter(
+                RecordWriter<Text, Text> lineRecordWriter) {
+            super(lineRecordWriter);
+        }
+
+        @Override
+        public void writeVertex(BasicVertex<LongWritable, DoubleWritable,
+                                FloatWritable, ?> vertex)
+                throws IOException, InterruptedException {
+            JSONArray jsonVertex = new JSONArray();
+            try {
+                jsonVertex.put(vertex.getVertexId().get());
+                jsonVertex.put(vertex.getVertexValue().get());
+                JSONArray jsonEdgeArray = new JSONArray();
+                for (LongWritable targetVertexId : vertex) {
+                    JSONArray jsonEdge = new JSONArray();
+                    jsonEdge.put(targetVertexId.get());
+                    jsonEdge.put(vertex.getEdgeValue(targetVertexId).get());
+                    jsonEdgeArray.put(jsonEdge);
+                }
+                jsonVertex.put(jsonEdgeArray);
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "writeVertex: Couldn't write vertex " + vertex);
+            }
+            getRecordWriter().write(new Text(jsonVertex.toString()), null);
+        }
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    @Override
+    public int run(String[] argArray) throws Exception {
+        Preconditions.checkArgument(argArray.length == 4,
+            "run: Must have 4 arguments <input path> <output path> " +
+            "<source vertex id> <# of workers>");
+
+        GiraphJob job = new GiraphJob(getConf(), getClass().getName());
+        job.setVertexClass(getClass());
+        job.setVertexInputFormatClass(
+            SimpleShortestPathsVertexInputFormat.class);
+        job.setVertexOutputFormatClass(
+            SimpleShortestPathsVertexOutputFormat.class);
+        FileInputFormat.addInputPath(job, new Path(argArray[0]));
+        FileOutputFormat.setOutputPath(job, new Path(argArray[1]));
+        job.getConfiguration().setLong(SimpleShortestPathsVertex.SOURCE_ID,
+                                       Long.parseLong(argArray[2]));
+        job.setWorkerConfiguration(Integer.parseInt(argArray[3]),
+                                   Integer.parseInt(argArray[3]),
+                                   100.0f);
+
+        return job.run(true) ? 0 : -1;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.exit(ToolRunner.run(new SimpleShortestPathsVertex(), args));
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleSumCombiner.java b/src/main/java/org/apache/giraph/examples/SimpleSumCombiner.java
new file mode 100644
index 0000000..139885f
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleSumCombiner.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+
+import org.apache.giraph.graph.VertexCombiner;
+
+/**
+ * Test whether combiner is called by summing up the messages.
+ */
+public class SimpleSumCombiner
+        extends VertexCombiner<LongWritable, IntWritable> {
+
+    @Override
+    public Iterable<IntWritable> combine(LongWritable vertexIndex,
+            Iterable<IntWritable> messages) throws IOException {
+        int sum = 0;
+        for (IntWritable msg : messages) {
+            sum += msg.get();
+        }
+        List<IntWritable> value = new ArrayList<IntWritable>();
+        value.add(new IntWritable(sum));
+
+        return value;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleSuperstepVertex.java b/src/main/java/org/apache/giraph/examples/SimpleSuperstepVertex.java
new file mode 100644
index 0000000..82aeb1b
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleSuperstepVertex.java
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import com.google.common.collect.Maps;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.giraph.lib.TextVertexOutputFormat;
+import org.apache.giraph.lib.TextVertexOutputFormat.TextVertexWriter;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+/**
+ * A simple vertex implementation whose compute() runs until the superstep
+ * count exceeds 3, then votes to halt.
+ */
+public class SimpleSuperstepVertex extends
+        EdgeListVertex<LongWritable, IntWritable, FloatWritable, IntWritable> {
+    @Override
+    public void compute(Iterator<IntWritable> msgIterator) {
+        if (getSuperstep() > 3) {
+            voteToHalt();
+        }
+    }
+
+    /**
+     * Simple VertexReader that supports {@link SimpleSuperstepVertex}
+     */
+    public static class SimpleSuperstepVertexReader extends
+            GeneratedVertexReader<LongWritable, IntWritable,
+            FloatWritable, IntWritable> {
+        /** Class logger */
+        private static final Logger LOG =
+            Logger.getLogger(SimpleSuperstepVertexReader.class);
+        @Override
+        public boolean nextVertex() throws IOException, InterruptedException {
+            return totalRecords > recordsRead;
+        }
+
+        public SimpleSuperstepVertexReader() {
+            super();
+        }
+
+        @Override
+        public BasicVertex<LongWritable, IntWritable, FloatWritable,
+                IntWritable> getCurrentVertex()
+                throws IOException, InterruptedException {
+            BasicVertex<LongWritable, IntWritable,
+                        FloatWritable, IntWritable> vertex =
+                BspUtils.<LongWritable, IntWritable,
+                          FloatWritable, IntWritable>createVertex(
+                    configuration);
+            long tmpId = reverseIdOrder ?
+                ((inputSplit.getSplitIndex() + 1) * totalRecords) -
+                    recordsRead - 1 :
+                (inputSplit.getSplitIndex() * totalRecords) + recordsRead;
+            LongWritable vertexId = new LongWritable(tmpId);
+            IntWritable vertexValue =
+                new IntWritable((int) (vertexId.get() * 10));
+            Map<LongWritable, FloatWritable> edgeMap = Maps.newHashMap();
+            long destVertexId =
+                (vertexId.get() + 1) %
+                    (inputSplit.getNumSplits() * totalRecords);
+            float edgeValue = vertexId.get() * 100f;
+            edgeMap.put(new LongWritable(destVertexId),
+                        new FloatWritable(edgeValue));
+            vertex.initialize(vertexId, vertexValue, edgeMap, null);
+            ++recordsRead;
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getCurrentVertex: Return vertexId=" +
+                         vertex.getVertexId().get() +
+                         ", vertexValue=" + vertex.getVertexValue() +
+                         ", destinationId=" + destVertexId +
+                         ", edgeValue=" + edgeValue);
+            }
+            return vertex;
+        }
+    }
+
+    /**
+     * Simple VertexInputFormat that supports {@link SimpleSuperstepVertex}
+     */
+    public static class SimpleSuperstepVertexInputFormat extends
+            GeneratedVertexInputFormat<LongWritable,
+            IntWritable, FloatWritable, IntWritable> {
+        @Override
+        public VertexReader<LongWritable, IntWritable, FloatWritable, IntWritable>
+                createVertexReader(InputSplit split,
+                                   TaskAttemptContext context)
+                                   throws IOException {
+            return new SimpleSuperstepVertexReader();
+        }
+    }
+
+    /**
+     * Simple VertexWriter that supports {@link SimpleSuperstepVertex}
+     */
+    public static class SimpleSuperstepVertexWriter extends
+            TextVertexWriter<LongWritable, IntWritable, FloatWritable> {
+        public SimpleSuperstepVertexWriter(
+                RecordWriter<Text, Text> lineRecordWriter) {
+            super(lineRecordWriter);
+        }
+
+        @Override
+        public void writeVertex(
+                BasicVertex<LongWritable, IntWritable, FloatWritable, ?> vertex)
+                throws IOException, InterruptedException {
+            getRecordWriter().write(
+                new Text(vertex.getVertexId().toString()),
+                new Text(vertex.getVertexValue().toString()));
+        }
+    }
+
+    /**
+     * Simple VertexOutputFormat that supports {@link SimpleSuperstepVertex}
+     */
+    public static class SimpleSuperstepVertexOutputFormat extends
+            TextVertexOutputFormat<LongWritable, IntWritable, FloatWritable> {
+
+        @Override
+        public VertexWriter<LongWritable, IntWritable, FloatWritable>
+            createVertexWriter(TaskAttemptContext context)
+                throws IOException, InterruptedException {
+            RecordWriter<Text, Text> recordWriter =
+                textOutputFormat.getRecordWriter(context);
+            return new SimpleSuperstepVertexWriter(recordWriter);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleTextVertexOutputFormat.java b/src/main/java/org/apache/giraph/examples/SimpleTextVertexOutputFormat.java
new file mode 100644
index 0000000..8f652f0
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleTextVertexOutputFormat.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.io.IOException;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.giraph.lib.TextVertexOutputFormat;
+
+/**
+ * Simple text based vertex output format example.
+ */
+public class SimpleTextVertexOutputFormat extends
+         TextVertexOutputFormat<LongWritable, IntWritable, FloatWritable> {
+    /**
+     * Simple text based vertex writer
+     */
+    private static class SimpleTextVertexWriter
+            extends TextVertexWriter<LongWritable, IntWritable, FloatWritable> {
+
+        /**
+         * Initialize with the LineRecordWriter.
+         *
+         * @param lineRecordWriter Line record writer from TextOutputFormat
+         */
+        public SimpleTextVertexWriter(
+                RecordWriter<Text, Text> lineRecordWriter) {
+            super(lineRecordWriter);
+        }
+
+        @Override
+        public void writeVertex(
+                BasicVertex<LongWritable, IntWritable, FloatWritable, ?> vertex)
+                throws IOException, InterruptedException {
+            getRecordWriter().write(
+                new Text(vertex.getVertexId().toString()),
+                new Text(vertex.getVertexValue().toString()));
+        }
+    }
+
+    @Override
+    public VertexWriter<LongWritable, IntWritable, FloatWritable>
+        createVertexWriter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        RecordWriter<Text, Text> recordWriter =
+            textOutputFormat.getRecordWriter(context);
+        return new SimpleTextVertexWriter(recordWriter);
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/SimpleVertexWithWorkerContext.java b/src/main/java/org/apache/giraph/examples/SimpleVertexWithWorkerContext.java
new file mode 100644
index 0000000..1caa2ff
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SimpleVertexWithWorkerContext.java
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.WorkerContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Fully runnable example of how to
+ * emit worker data to HDFS during a graph
+ * computation.
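+ *
+ * <p>Driver sketch, mirroring {@code main()} below (the output directory
+ * is illustrative and must already exist in HDFS):
+ * <pre>
+ * int rc = ToolRunner.run(new SimpleVertexWithWorkerContext(),
+ *     new String[] { "/user/giraph/out", "4" });
+ * </pre>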
+ */
+public class SimpleVertexWithWorkerContext extends
+        EdgeListVertex<LongWritable, IntWritable, FloatWritable, DoubleWritable>
+        implements Tool {
+
+    public static final String OUTPUTDIR = "svwwc.outputdir";
+    private static final int TESTLENGTH = 30;
+
+    @Override
+    public void compute(Iterator<DoubleWritable> msgIterator)
+            throws IOException {
+
+        long superstep = getSuperstep();
+
+        if (superstep < TESTLENGTH) {
+            EmitterWorkerContext emitter =
+                    (EmitterWorkerContext) getWorkerContext();
+            emitter.emit("vertexId=" + getVertexId() +
+                         " superstep=" + superstep + "\n");
+        } else {
+            voteToHalt();
+        }
+    }
+
+    @SuppressWarnings("rawtypes")
+    public static class EmitterWorkerContext extends WorkerContext {
+
+        private static final String FILENAME = "emitter_";
+        private DataOutputStream out;
+
+        @Override
+        public void preApplication() {
+            Context context = getContext();
+            FileSystem fs;
+
+            try {
+                fs = FileSystem.get(context.getConfiguration());
+
+                String p = context.getConfiguration()
+                    .get(SimpleVertexWithWorkerContext.OUTPUTDIR);
+                if (p == null) {
+                    throw new IllegalArgumentException(
+                        SimpleVertexWithWorkerContext.OUTPUTDIR +
+                        " undefined!");
+                }
+
+                Path path = new Path(p);
+                if (!fs.exists(path)) {
+                    throw new IllegalArgumentException(path +
+                            " doesn't exist");
+                }
+
+                Path outF = new Path(path, FILENAME +
+                        context.getTaskAttemptID());
+                if (fs.exists(outF)) {
+                    throw new IllegalArgumentException(outF +
+                            " already exists");
+                }
+
+                out = fs.create(outF);
+            } catch (IOException e) {
+                throw new RuntimeException(
+                        "can't initialize WorkerContext", e);
+            }
+        }
+
+        @Override
+        public void postApplication() {
+            if (out != null) {
+                try {
+                    out.flush();
+                    out.close();
+                } catch (IOException e) {
+                    throw new RuntimeException(
+                            "can't finalize WorkerContext", e);
+                }
+                out = null;
+            }
+        }
+
+        @Override
+        public void preSuperstep() { }
+
+        @Override
+        public void postSuperstep() { }
+
+        public void emit(String s) {
+            try {
+                out.writeUTF(s);
+            } catch (IOException e) {
+                throw new RuntimeException("can't emit", e);
+            }
+        }
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+        if (args.length != 2) {
+            throw new IllegalArgumentException(
+                "run: Must have 2 arguments <output path> <# of workers>");
+        }
+        GiraphJob job = new GiraphJob(getConf(), getClass().getName());
+        job.setVertexClass(getClass());
+        job.setVertexInputFormatClass(
+            SimpleSuperstepVertexInputFormat.class);
+        job.setWorkerContextClass(EmitterWorkerContext.class);
+        Configuration conf = job.getConfiguration();
+        conf.set(SimpleVertexWithWorkerContext.OUTPUTDIR, args[0]);
+        job.setWorkerConfiguration(Integer.parseInt(args[1]),
+                                   Integer.parseInt(args[1]),
+                                   100.0f);
+        return job.run(true) ? 0 : -1;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.exit(ToolRunner.run(new SimpleVertexWithWorkerContext(), args));
+    }
+}
\ No newline at end of file
diff --git a/src/main/java/org/apache/giraph/examples/SumAggregator.java b/src/main/java/org/apache/giraph/examples/SumAggregator.java
new file mode 100644
index 0000000..6536b5e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/SumAggregator.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.hadoop.io.DoubleWritable;
+
+import org.apache.giraph.graph.Aggregator;
+
+/**
+ * Aggregator for summing up values.
+ *
+ */
+public class SumAggregator implements Aggregator<DoubleWritable> {
+
+  private double sum = 0;
+
+  public void aggregate(double value) {
+      sum += value;
+  }
+
+  public void aggregate(DoubleWritable value) {
+      sum += value.get();
+  }
+
+  public void setAggregatedValue(DoubleWritable value) {
+      sum = value.get();
+  }
+
+  public DoubleWritable getAggregatedValue() {
+      return new DoubleWritable(sum);
+  }
+
+  public DoubleWritable createAggregatedValue() {
+      return new DoubleWritable();
+  }
+
+}
diff --git a/src/main/java/org/apache/giraph/examples/VerifyMessage.java b/src/main/java/org/apache/giraph/examples/VerifyMessage.java
new file mode 100644
index 0000000..7553b71
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/VerifyMessage.java
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.graph.*;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ * An example that simply uses its id, value, and edges to compute new data
+ * every iteration to verify that messages are sent and received at the
+ * appropriate location and superstep.
+ */
+public class VerifyMessage {
+    public static class VerifiableMessage implements Writable {
+        /** Superstep sent on */
+        public long superstep;
+        /** Source vertex id */
+        public long sourceVertexId;
+        /** Value */
+        public float value;
+
+        public VerifiableMessage() {}
+
+        public VerifiableMessage(
+                long superstep, long sourceVertexId, float value) {
+            this.superstep = superstep;
+            this.sourceVertexId = sourceVertexId;
+            this.value = value;
+        }
+
+        @Override
+        public void readFields(DataInput input) throws IOException {
+            superstep = input.readLong();
+            sourceVertexId = input.readLong();
+            value = input.readFloat();
+        }
+
+        @Override
+        public void write(DataOutput output) throws IOException {
+            output.writeLong(superstep);
+            output.writeLong(sourceVertexId);
+            output.writeFloat(value);
+        }
+
+        @Override
+        public String toString() {
+            return "(superstep=" + superstep + ",sourceVertexId=" +
+                sourceVertexId + ",value=" + value + ")";
+        }
+    }
+
+    public static class VerifyMessageVertex extends
+            EdgeListVertex<LongWritable, IntWritable, FloatWritable,
+            VerifiableMessage> {
+        /** User can access this after the application finishes if run locally */
+        public static long finalSum;
+        /** Number of supersteps to run (6 by default) */
+        private static int supersteps = 6;
+        /** Class logger */
+        private static final Logger LOG =
+            Logger.getLogger(VerifyMessageVertex.class);
+
+        /** Dynamically set number of supersteps */
+        public static final String SUPERSTEP_COUNT =
+            "verifyMessageVertex.superstepCount";
+
+        public static class VerifyMessageVertexWorkerContext extends
+                WorkerContext {
+            @Override
+            public void preApplication() throws InstantiationException,
+                    IllegalAccessException {
+                registerAggregator(LongSumAggregator.class.getName(),
+                    LongSumAggregator.class);
+                LongSumAggregator sumAggregator = (LongSumAggregator)
+                    getAggregator(LongSumAggregator.class.getName());
+                sumAggregator.setAggregatedValue(new LongWritable(0));
+                supersteps = getContext().getConfiguration().getInt(
+                    SUPERSTEP_COUNT, supersteps);
+            }
+
+            @Override
+            public void postApplication() {
+                LongSumAggregator sumAggregator = (LongSumAggregator)
+                    getAggregator(LongSumAggregator.class.getName());
+                finalSum = sumAggregator.getAggregatedValue().get();
+            }
+
+            @Override
+            public void preSuperstep() {
+                useAggregator(LongSumAggregator.class.getName());
+            }
+
+            @Override
+            public void postSuperstep() {}
+        }
+
+        @Override
+        public void compute(Iterator<VerifiableMessage> msgIterator) {
+            LongSumAggregator sumAggregator = (LongSumAggregator)
+                getAggregator(LongSumAggregator.class.getName());
+            if (getSuperstep() > supersteps) {
+                voteToHalt();
+                return;
+            }
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("compute: " + sumAggregator);
+            }
+            sumAggregator.aggregate(getVertexId().get());
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("compute: sum = " +
+                          sumAggregator.getAggregatedValue().get() +
+                          " for vertex " + getVertexId());
+            }
+            float msgValue = 0.0f;
+            while (msgIterator.hasNext()) {
+                VerifiableMessage msg = msgIterator.next();
+                msgValue += msg.value;
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("compute: got msg = " + msg +
+                              " for vertex id " + getVertexId() +
+                              ", vertex value " + getVertexValue() +
+                              " on superstep " + getSuperstep());
+                }
+                if (msg.superstep != getSuperstep() - 1) {
+                    throw new IllegalStateException(
+                        "compute: Message must come from the previous " +
+                        "superstep, current superstep = " +
+                        getSuperstep());
+                }
+                if ((msg.sourceVertexId != getVertexId().get() - 1) &&
+                        (getVertexId().get() != 0)) {
+                    throw new IllegalStateException(
+                        "compute: Message should come from the previous " +
+                        "vertex id, but came from " +
+                        msg.sourceVertexId);
+                }
+            }
+            int vertexValue = getVertexValue().get();
+            setVertexValue(new IntWritable(vertexValue + (int) msgValue));
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("compute: vertex " + getVertexId() +
+                          " has value " + getVertexValue() +
+                          " on superstep " + getSuperstep());
+            }
+            for (LongWritable targetVertexId : this) {
+                FloatWritable edgeValue = getEdgeValue(targetVertexId);
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("compute: vertex " + getVertexId() +
+                              " sending edgeValue " + edgeValue +
+                              " vertexValue " + vertexValue +
+                              " total " +
+                              (edgeValue.get() + (float) vertexValue) +
+                              " to vertex " + targetVertexId +
+                              " on superstep " + getSuperstep());
+                }
+                edgeValue.set(edgeValue.get() + (float) vertexValue);
+                addEdge(targetVertexId, edgeValue);
+                sendMsg(targetVertexId,
+                    new VerifiableMessage(
+                        getSuperstep(), getVertexId().get(), edgeValue.get()));
+            }
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/examples/VertexWithComponentTextOutputFormat.java b/src/main/java/org/apache/giraph/examples/VertexWithComponentTextOutputFormat.java
new file mode 100644
index 0000000..17e6fa9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/examples/VertexWithComponentTextOutputFormat.java
@@ -0,0 +1,71 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.giraph.examples;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.giraph.lib.TextVertexOutputFormat;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * Text-based {@link org.apache.giraph.graph.VertexOutputFormat} for usage with
+ * {@link ConnectedComponentsVertex}
+ *
+ * Each line consists of a vertex and its associated component (represented by the smallest
+ * vertex id in the component)
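+ *
+ * <p>For example, a vertex 5 in the component whose smallest vertex id is 1
+ * is written as the tab-separated line:
+ * <pre>
+ * 5	1
+ * </pre>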
+ */
+public class VertexWithComponentTextOutputFormat extends
+        TextVertexOutputFormat<IntWritable, IntWritable, NullWritable> {
+
+    @Override
+    public VertexWriter<IntWritable, IntWritable, NullWritable>
+            createVertexWriter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        RecordWriter<Text, Text> recordWriter =
+                textOutputFormat.getRecordWriter(context);
+        return new VertexWithComponentWriter(recordWriter);
+    }
+
+    public static class VertexWithComponentWriter extends
+            TextVertexOutputFormat.TextVertexWriter<IntWritable, IntWritable,
+            NullWritable> {
+
+        public VertexWithComponentWriter(RecordWriter<Text, Text> writer) {
+            super(writer);
+        }
+
+        @Override
+        public void writeVertex(BasicVertex<IntWritable, IntWritable,
+                NullWritable, ?> vertex) throws IOException,
+                InterruptedException {
+            StringBuilder output = new StringBuilder();
+            output.append(vertex.getVertexId().get());
+            output.append('\t');
+            output.append(vertex.getVertexValue().get());
+            getRecordWriter().write(new Text(output.toString()), null);
+        }
+
+    }
+}
\ No newline at end of file
diff --git a/src/main/java/org/apache/giraph/graph/Aggregator.java b/src/main/java/org/apache/giraph/graph/Aggregator.java
new file mode 100644
index 0000000..3b0bc98
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/Aggregator.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Interface for Aggregator.  Allows aggregate operations for all vertices
+ * in a given superstep.
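+ *
+ * <p>A minimal implementation sketch, modeled on the {@code SumAggregator}
+ * example added in this patch (max is equally valid, being commutative
+ * and associative):
+ * <pre>
+ * public class DoubleMaxAggregator implements Aggregator&lt;DoubleWritable&gt; {
+ *     private double max = Double.NEGATIVE_INFINITY;
+ *     public void aggregate(DoubleWritable value) {
+ *         max = Math.max(max, value.get());
+ *     }
+ *     public void setAggregatedValue(DoubleWritable value) {
+ *         max = value.get();
+ *     }
+ *     public DoubleWritable getAggregatedValue() {
+ *         return new DoubleWritable(max);
+ *     }
+ *     public DoubleWritable createAggregatedValue() {
+ *         return new DoubleWritable();
+ *     }
+ * }
+ * </pre>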
+ *
+ * @param <A> Aggregated value type (must be a {@link Writable})
+ */
+public interface Aggregator<A extends Writable> {
+    /**
+     * Add a new value.
+     * The operation needs to be commutative and associative.
+     *
+     * @param value Value to be aggregated
+     */
+    void aggregate(A value);
+
+    /**
+     * Set aggregated value.
+     * Can be used for initialization or reset.
+     *
+     * @param value Value to be set as the aggregated value
+     */
+    void setAggregatedValue(A value);
+
+    /**
+     * Return current aggregated value.
+     * Needs to return an initialized value even if aggregate() or
+     * setAggregatedValue() have not been called before.
+     *
+     * @return Current aggregated value
+     */
+    A getAggregatedValue();
+
+    /**
+     * Return a new aggregated value object.
+     * Must be modifiable without affecting the internals of the Aggregator.
+     *
+     * @return New aggregated value object
+     */
+    A createAggregatedValue();
+}
diff --git a/src/main/java/org/apache/giraph/graph/AggregatorUsage.java b/src/main/java/org/apache/giraph/graph/AggregatorUsage.java
new file mode 100644
index 0000000..37ce57c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/AggregatorUsage.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Vertex classes can use this interface to register and use aggregators
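+ *
+ * <p>Typical lifecycle, as exercised by the {@code VerifyMessageVertex}
+ * example in this patch: register in {@code preApplication()}, re-enable
+ * each superstep in {@code preSuperstep()}, then aggregate from
+ * {@code compute()}:
+ * <pre>
+ * // WorkerContext.preApplication():
+ * registerAggregator(LongSumAggregator.class.getName(),
+ *     LongSumAggregator.class);
+ * // WorkerContext.preSuperstep():
+ * useAggregator(LongSumAggregator.class.getName());
+ * // BasicVertex.compute():
+ * LongSumAggregator sum = (LongSumAggregator)
+ *     getAggregator(LongSumAggregator.class.getName());
+ * sum.aggregate(getVertexId().get());
+ * </pre>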
+ */
+public interface AggregatorUsage {
+    /**
+     * Register an aggregator in preSuperstep() and/or preApplication().
+     *
+     * @param name Name of the aggregator
+     * @param aggregatorClass Class type of the aggregator
+     * @return created Aggregator or null when already registered
+     */
+    public <A extends Writable> Aggregator<A> registerAggregator(
+        String name,
+        Class<? extends Aggregator<A>> aggregatorClass)
+        throws InstantiationException, IllegalAccessException;
+
+    /**
+     * Get a registered aggregator.
+     *
+     * @param name Name of aggregator
+     * @return Aggregator<A> (null when not registered)
+     */
+    public Aggregator<? extends Writable> getAggregator(String name);
+
+    /**
+     * Use a registered aggregator in current superstep.
+     * Even when the same aggregator should be used in the next
+     * superstep, useAggregator needs to be called at the beginning
+     * of that superstep in preSuperstep().
+     *
+     * @param name Name of aggregator
+     * @return boolean (false when not registered)
+     */
+    public boolean useAggregator(String name);
+}
diff --git a/src/main/java/org/apache/giraph/graph/AggregatorWriter.java b/src/main/java/org/apache/giraph/graph/AggregatorWriter.java
new file mode 100644
index 0000000..244652d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/AggregatorWriter.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+
+/**
+ *  An AggregatorWriter is used to export Aggregators during or at the end of
+ *  a computation. It runs on the master and is called at the end of each
+ *  superstep. The special signal {@link AggregatorWriter#LAST_SUPERSTEP} is 
+ *  passed to {@link AggregatorWriter#writeAggregator(Map, long)} as the 
+ *  superstep value to signal the end of computation.
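+ *
+ *  <p>A minimal sketch of an implementation that prints every aggregator
+ *  value (illustrative only, not part of this patch):
+ *  <pre>
+ *  public class PrintingAggregatorWriter implements AggregatorWriter {
+ *      public void initialize(Context context, long applicationAttempt) { }
+ *      public void writeAggregator(
+ *              Map&lt;String, Aggregator&lt;Writable&gt;&gt; aggregatorMap,
+ *              long superstep) {
+ *          for (Map.Entry&lt;String, Aggregator&lt;Writable&gt;&gt; entry :
+ *                  aggregatorMap.entrySet()) {
+ *              System.out.println(superstep + "\t" + entry.getKey() + "\t" +
+ *                  entry.getValue().getAggregatedValue());
+ *          }
+ *      }
+ *      public void close() { }
+ *  }
+ *  </pre>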
+ */
+public interface AggregatorWriter {
+    /** Signal for last superstep */
+    public static final int LAST_SUPERSTEP = -1;
+
+    /**
+     * The method is called at the initialization of the AggregatorWriter.
+     * More precisely, the aggregatorWriter is initialized each time a new
+     * master is elected.
+     * 
+     * @param context Mapper Context in which the master is running
+     * @param applicationAttempt ID of the applicationAttempt, used to
+     *        disambiguate aggregator writes for different attempts
+     * @throws IOException
+     */
+    @SuppressWarnings("rawtypes")
+    void initialize(Context context, long applicationAttempt) throws IOException;
+
+    /**
+     * The method is called at the end of each superstep. The user might decide
+     * whether to write the aggregator values for the current superstep. For
+     * the last superstep, {@link AggregatorWriter#LAST_SUPERSTEP} is passed.
+     * 
+     * @param aggregatorMap Map of aggregators to write
+     * @param superstep Current superstep
+     * @throws IOException
+     */
+    void writeAggregator(
+            Map<String, Aggregator<Writable>> aggregatorMap, 
+            long superstep) throws IOException;
+
+    /**
+     * The method is called at the end of a successful computation. The method
+     * is not called when the job fails and a new master is elected. For this
+     * reason it's advised to flush data at the end of 
+     * {@link AggregatorWriter#writeAggregator(Map, long)}.
+     * 
+     * @throws IOException
+     */
+    void close() throws IOException;
+}
diff --git a/src/main/java/org/apache/giraph/graph/BasicVertex.java b/src/main/java/org/apache/giraph/graph/BasicVertex.java
new file mode 100644
index 0000000..96efc3e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BasicVertex.java
@@ -0,0 +1,276 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+/**
+ * Basic interface for writing a BSP application for computation.
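+ *
+ * <p>Applications typically extend a concrete subclass such as the
+ * {@code EdgeListVertex} used throughout the examples in this patch and
+ * override {@link #compute(Iterator)}; the whole of
+ * {@code SimpleSuperstepVertex}'s logic, for instance, is:
+ * <pre>
+ * public void compute(Iterator&lt;IntWritable&gt; msgIterator) {
+ *     if (getSuperstep() > 3) {
+ *         voteToHalt();
+ *     }
+ * }
+ * </pre>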
+ *
+ * @param <I> vertex id
+ * @param <V> vertex data
+ * @param <E> edge data
+ * @param <M> message data
+ */
+@SuppressWarnings("rawtypes")
+public abstract class BasicVertex<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        implements AggregatorUsage, Iterable<I>, Writable, Configurable {
+    /** Global graph state */
+    private GraphState<I, V, E, M> graphState;
+    /** Configuration */
+    private Configuration conf;
+    /** If true, do not do any more computation on this vertex. */
+    boolean halt = false;
+
+    /**
+     * This method must be called after instantiation of a vertex with BspUtils
+     * unless deserialization from readFields() is called.
+     *
+     * @param vertexId Will be the vertex id
+     * @param vertexValue Will be the vertex value
+     * @param edges A map of destination vertex ids to edge values (can be null)
+     * @param messages Initial messages for this vertex (can be null)
+     */
+    public abstract void initialize(
+        I vertexId, V vertexValue, Map<I, E> edges, Iterable<M> messages);
+
+    /**
+     * Must be defined by user to do computation on a single Vertex.
+     *
+     * @param msgIterator Iterator to the messages that were sent to this
+     *        vertex in the previous superstep
+     * @throws IOException
+     */
+    public abstract void compute(Iterator<M> msgIterator) throws IOException;
+
+    /**
+     * Retrieves the current superstep.
+     *
+     * @return Current superstep
+     */
+    public long getSuperstep() {
+        return getGraphState().getSuperstep();
+    }
+
+    /**
+     * Get the vertex id.
+     *
+     * @return Vertex id
+     */
+    public abstract I getVertexId();
+
+    /**
+     * Get the vertex value (data stored with vertex)
+     *
+     * @return Vertex value
+     */
+    public abstract V getVertexValue();
+
+    /**
+     * Set the vertex data (immediately visible in the computation)
+     *
+     * @param vertexValue Vertex data to be set
+     */
+    public abstract void setVertexValue(V vertexValue);
+
+    /**
+     * Get the total (all workers) number of vertices that
+     * existed in the previous superstep.
+     *
+     * @return Total number of vertices (-1 if first superstep)
+     */
+    public long getNumVertices() {
+        return getGraphState().getNumVertices();
+    }
+
+    /**
+     * Get the total (all workers) number of edges that
+     * existed in the previous superstep.
+     *
+     * @return Total number of edges (-1 if first superstep)
+     */
+    public long getNumEdges() {
+        return getGraphState().getNumEdges();
+    }
+
+    /**
+     * Get a read-only view of the out-edges of this vertex.
+     *
+     * @return the out edges (sort order determined by subclass implementation).
+     */
+    @Override
+    public abstract Iterator<I> iterator();
+
+    /**
+     * Get the edge value associated with a target vertex id.
+     *
+     * @param targetVertexId Target vertex id to check
+     *
+     * @return the value of the edge to targetVertexId (or null if there
+     *         is no edge to it)
+     */
+    public abstract E getEdgeValue(I targetVertexId);
+
+    /**
+     * Does an edge with the target vertex id exist?
+     *
+     * @param targetVertexId Target vertex id to check
+     * @return true if there is an edge to the target
+     */
+    public abstract boolean hasEdge(I targetVertexId);
+
+    /**
+     * Get the number of outgoing edges on this vertex.
+     *
+     * @return the total number of outbound edges from this vertex
+     */
+    public abstract int getNumOutEdges();
+
+    /**
+     * Send a message to a vertex id.  The message should not be mutated after
+     * this method returns or else undefined results could occur.
+     *
+     * @param id Vertex id to send the message to
+     * @param msg Message data to send.  Note that after the message is sent,
+     *        the user should not modify the object.
+     */
+    public void sendMsg(I id, M msg) {
+        if (msg == null) {
+            throw new IllegalArgumentException(
+                "sendMsg: Cannot send null message to " + id);
+        }
+        getGraphState().getWorkerCommunications().
+            sendMessageReq(id, msg);
+    }
+
+    /**
+     * Send a message to all edges.
+     *
+     * @param msg Message to send to all edges
+     */
+    public abstract void sendMsgToAllEdges(M msg);
+
+    /**
+     * After this is called, the compute() code will no longer be called for
+     * this vertex unless a message is sent to it.  Then the compute() code
+     * will be called once again until this function is called.  The
+     * application finishes only when all vertices vote to halt.
+     */
+    public void voteToHalt() {
+        halt = true;
+    }
+
+    /**
+     * Is this vertex done?
+     *
+     * @return True if this vertex has voted to halt
+     */
+    public boolean isHalted() {
+        return halt;
+    }
+
+    /**
+     *  Get the list of incoming messages from the previous superstep.  Same as
+     *  the message iterator passed to compute().
+     */
+    public abstract Iterable<M> getMessages();
+
+    /**
+     * Copy the messages this vertex should process in the current superstep
+     *
+     * @param messages the messages sent to this vertex in the previous superstep
+     */
+    abstract void putMessages(Iterable<M> messages);
+
+    /**
+     * Release unnecessary resources (will be called after vertex returns from
+     * {@link #compute(Iterator)})
+     */
+    abstract void releaseResources();
+
+    /**
+     * Get the graph state for all workers.
+     *
+     * @return Graph state for all workers
+     */
+    GraphState<I, V, E, M> getGraphState() {
+        return graphState;
+    }
+
+    /**
+     * Set the graph state for all workers
+     *
+     * @param graphState Graph state for all workers
+     */
+    void setGraphState(GraphState<I, V, E, M> graphState) {
+        this.graphState = graphState;
+    }
+
+    /**
+     * Get the mapper context
+     *
+     * @return Mapper context
+     */
+    public Mapper.Context getContext() {
+        return getGraphState().getContext();
+    }
+
+    /**
+     * Get the worker context
+     *
+     * @return WorkerContext context
+     */
+    public WorkerContext getWorkerContext() {
+        return getGraphState().getGraphMapper().getWorkerContext();
+    }
+
+    @Override
+    public final <A extends Writable> Aggregator<A> registerAggregator(
+            String name,
+            Class<? extends Aggregator<A>> aggregatorClass)
+            throws InstantiationException, IllegalAccessException {
+        return getGraphState().getGraphMapper().getAggregatorUsage().
+            registerAggregator(name, aggregatorClass);
+    }
+
+    @Override
+    public final Aggregator<? extends Writable> getAggregator(String name) {
+        return getGraphState().getGraphMapper().getAggregatorUsage().
+            getAggregator(name);
+    }
+
+    @Override
+    public final boolean useAggregator(String name) {
+        return getGraphState().getGraphMapper().getAggregatorUsage().
+            useAggregator(name);
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/BasicVertexResolver.java b/src/main/java/org/apache/giraph/graph/BasicVertexResolver.java
new file mode 100644
index 0000000..7809854
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BasicVertexResolver.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Handles all the situations that can arise upon creation/removal of
+ * vertices and edges.
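+ *
+ * <p>A simplified resolution sketch (illustrative only; a complete
+ * resolver must also honor the vertex and edge mutations carried by the
+ * {@code VertexChanges} argument):
+ * <pre>
+ * public BasicVertex&lt;I, V, E, M&gt; resolve(I vertexId,
+ *         BasicVertex&lt;I, V, E, M&gt; vertex,
+ *         VertexChanges&lt;I, V, E, M&gt; vertexChanges,
+ *         Iterable&lt;M&gt; messages) {
+ *     if (vertex == null &amp;&amp; messages != null) {
+ *         // Messages were sent to a missing vertex: create it so the
+ *         // messages can be delivered.
+ *         vertex = instantiateVertex();
+ *         vertex.initialize(vertexId, null, null, messages);
+ *     }
+ *     return vertex;
+ * }
+ * </pre>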
+ */
+@SuppressWarnings("rawtypes")
+public interface BasicVertexResolver<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable> {
+    /**
+     * A vertex may have been removed, created zero or more times and had
+     * zero or more messages sent to it.  This method will handle all situations
+     * excluding the normal case (a vertex already exists and has zero or more
+     * messages sent to it).
+     *
+     * @param vertexId Vertex id (can be used for {@link BasicVertex}'s
+     *        initialize())
+     * @param vertex Original vertex or null if none
+     * @param vertexChanges Changes that happened to this vertex or null if none
+     * @param messages Messages received in the last superstep, or null if none
+     * @return Vertex to be kept; if null and a vertex currently exists,
+     *         it will be removed
+     */
+    BasicVertex<I, V, E, M> resolve(I vertexId,
+                                    BasicVertex<I, V, E, M> vertex,
+                                    VertexChanges<I, V, E, M> vertexChanges,
+                                    Iterable<M> messages);
+
+    /**
+     * Create a default vertex that can be used to return from resolve().
+     *
+     * @return Newly instantiated vertex.
+     */
+    BasicVertex<I, V, E, M> instantiateVertex();
+}
diff --git a/src/main/java/org/apache/giraph/graph/BspService.java b/src/main/java/org/apache/giraph/graph/BspService.java
new file mode 100644
index 0000000..024286d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BspService.java
@@ -0,0 +1,1023 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.giraph.bsp.CentralizedService;
+import org.apache.giraph.graph.partition.GraphPartitionerFactory;
+import org.apache.giraph.zk.BspEvent;
+import org.apache.giraph.zk.PredicateLock;
+import org.apache.giraph.zk.ZooKeeperExt;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.log4j.Logger;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.Watcher.Event.EventType;
+import org.apache.zookeeper.Watcher.Event.KeeperState;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.security.InvalidParameterException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Zookeeper-based implementation of {@link CentralizedService}.
+ */
+@SuppressWarnings("rawtypes")
+public abstract class BspService<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        implements Watcher, CentralizedService<I, V, E, M> {
+    /** Private ZooKeeper instance that implements the service */
+    private final ZooKeeperExt zk;
+    /** Has the connection occurred? */
+    private final BspEvent connectedEvent = new PredicateLock();
+    /** Has worker registration changed (either healthy or unhealthy) */
+    private final BspEvent workerHealthRegistrationChanged =
+        new PredicateLock();
+    /** InputSplits are ready for consumption by workers */
+    private final BspEvent inputSplitsAllReadyChanged =
+        new PredicateLock();
+    /** InputSplit reservation or finished notification and synchronization */
+    private final BspEvent inputSplitsStateChanged =
+        new PredicateLock();
+    /** InputSplits are done being processed by workers */
+    private final BspEvent inputSplitsAllDoneChanged =
+        new PredicateLock();
+    /** InputSplit done by a worker finished notification and synchronization */
+    private final BspEvent inputSplitsDoneStateChanged =
+        new PredicateLock();
+    /** Are the partition assignments to workers ready? */
+    private final BspEvent partitionAssignmentsReadyChanged =
+        new PredicateLock();
+
+    /** Application attempt changed */
+    private final BspEvent applicationAttemptChanged =
+        new PredicateLock();
+    /** Superstep finished synchronization */
+    private final BspEvent superstepFinished =
+        new PredicateLock();
+    /** Master election changed for any waited on attempt */
+    private final BspEvent masterElectionChildrenChanged =
+        new PredicateLock();
+    /** Cleaned up directory children changed */
+    private final BspEvent cleanedUpChildrenChanged =
+        new PredicateLock();
+    /** Registered list of BspEvents */
+    private final List<BspEvent> registeredBspEvents =
+        new ArrayList<BspEvent>();
+    /** Configuration of the job */
+    private final Configuration conf;
+    /** Job context (mainly for progress) */
+    private final Mapper<?, ?, ?, ?>.Context context;
+    /** Cached superstep (from ZooKeeper) */
+    private long cachedSuperstep = UNSET_SUPERSTEP;
+    /** Restarted from a checkpoint (manual or automatic) */
+    private long restartedSuperstep = UNSET_SUPERSTEP;
+    /** Cached application attempt (from ZooKeeper) */
+    private long cachedApplicationAttempt = UNSET_APPLICATION_ATTEMPT;
+    /** Job id, to ensure uniqueness */
+    private final String jobId;
+    /** Task partition, to ensure uniqueness */
+    private final int taskPartition;
+    /** My hostname */
+    private final String hostname;
+    /** Combination of hostname '_' partition (unique id) */
+    private final String hostnamePartitionId;
+    /** Graph partitioner */
+    private final GraphPartitionerFactory<I, V, E, M> graphPartitionerFactory;
+    /** Mapper that will do the graph computation */
+    private final GraphMapper<I, V, E, M> graphMapper;
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(BspService.class);
+    /** File system */
+    private final FileSystem fs;
+    /** Checkpoint frequency */
+    private int checkpointFrequency = -1;
+    /** Map of aggregators */
+    private Map<String, Aggregator<Writable>> aggregatorMap =
+        new TreeMap<String, Aggregator<Writable>>();
+
+    /** Unset superstep */
+    public static final long UNSET_SUPERSTEP = Long.MIN_VALUE;
+    /** Input superstep (superstep during which the vertices are loaded) */
+    public static final long INPUT_SUPERSTEP = -1;
+    /** Unset application attempt */
+    public static final long UNSET_APPLICATION_ATTEMPT = Long.MIN_VALUE;
+
+    public static final String BASE_DIR = "/_hadoopBsp";
+    public static final String MASTER_JOB_STATE_NODE = "/_masterJobState";
+    public static final String INPUT_SPLIT_DIR = "/_inputSplitDir";
+    public static final String INPUT_SPLIT_DONE_DIR = "/_inputSplitDoneDir";
+    public static final String INPUT_SPLIT_RESERVED_NODE =
+        "/_inputSplitReserved";
+    public static final String INPUT_SPLIT_FINISHED_NODE =
+        "/_inputSplitFinished";
+    public static final String INPUT_SPLITS_ALL_READY_NODE =
+        "/_inputSplitsAllReady";
+    public static final String INPUT_SPLITS_ALL_DONE_NODE =
+        "/_inputSplitsAllDone";
+    public static final String APPLICATION_ATTEMPTS_DIR =
+        "/_applicationAttemptsDir";
+    public static final String MASTER_ELECTION_DIR = "/_masterElectionDir";
+    public static final String SUPERSTEP_DIR = "/_superstepDir";
+    public static final String MERGED_AGGREGATOR_DIR =
+        "/_mergedAggregatorDir";
+    public static final String WORKER_HEALTHY_DIR = "/_workerHealthyDir";
+    public static final String WORKER_UNHEALTHY_DIR = "/_workerUnhealthyDir";
+    public static final String WORKER_FINISHED_DIR = "/_workerFinishedDir";
+    public static final String PARTITION_ASSIGNMENTS_DIR =
+        "/_partitionAssignments";
+    public static final String PARTITION_EXCHANGE_DIR =
+        "/_partitionExchangeDir";
+    public static final String SUPERSTEP_FINISHED_NODE = "/_superstepFinished";
+    public static final String CLEANED_UP_DIR = "/_cleanedUpDir";
+
+    public static final String JSONOBJ_AGGREGATOR_VALUE_ARRAY_KEY =
+        "_aggregatorValueArrayKey";
+    public static final String JSONOBJ_PARTITION_STATS_KEY =
+            "_partitionStatsKey";
+    public static final String JSONOBJ_FINISHED_VERTICES_KEY =
+        "_verticesFinishedKey";
+    public static final String JSONOBJ_NUM_VERTICES_KEY = "_numVerticesKey";
+    public static final String JSONOBJ_NUM_EDGES_KEY = "_numEdgesKey";
+    public static final String JSONOBJ_NUM_MESSAGES_KEY = "_numMsgsKey";
+    public static final String JSONOBJ_HOSTNAME_ID_KEY = "_hostnameIdKey";
+    public static final String JSONOBJ_MAX_VERTEX_INDEX_KEY =
+        "_maxVertexIndexKey";
+    public static final String JSONOBJ_HOSTNAME_KEY = "_hostnameKey";
+    public static final String JSONOBJ_PORT_KEY = "_portKey";
+    public static final String JSONOBJ_CHECKPOINT_FILE_PREFIX_KEY =
+        "_checkpointFilePrefixKey";
+    public static final String JSONOBJ_PREVIOUS_HOSTNAME_KEY =
+        "_previousHostnameKey";
+    public static final String JSONOBJ_PREVIOUS_PORT_KEY = "_previousPortKey";
+    public static final String JSONOBJ_STATE_KEY = "_stateKey";
+    public static final String JSONOBJ_APPLICATION_ATTEMPT_KEY =
+        "_applicationAttemptKey";
+    public static final String JSONOBJ_SUPERSTEP_KEY =
+        "_superstepKey";
+    public static final String AGGREGATOR_NAME_KEY = "_aggregatorNameKey";
+    public static final String AGGREGATOR_CLASS_NAME_KEY =
+        "_aggregatorClassNameKey";
+    public static final String AGGREGATOR_VALUE_KEY = "_aggregatorValueKey";
+
+    public static final String WORKER_SUFFIX = "_worker";
+    public static final String MASTER_SUFFIX = "_master";
+
+    /** Path to the job's root */
+    public final String BASE_PATH;
+    /** Path to the job state determined by the master (informative only) */
+    public final String MASTER_JOB_STATE_PATH;
+    /** Path to the input splits written by the master */
+    public final String INPUT_SPLIT_PATH;
+    /** Path to the input splits all ready to be processed by workers */
+    public final String INPUT_SPLITS_ALL_READY_PATH;
+    /** Path to the input splits done */
+    public final String INPUT_SPLIT_DONE_PATH;
+    /** Path to the input splits all done to notify the workers to proceed */
+    public final String INPUT_SPLITS_ALL_DONE_PATH;
+    /** Path to the application attempts */
+    public final String APPLICATION_ATTEMPTS_PATH;
+    /** Path to the cleaned up notifications */
+    public final String CLEANED_UP_PATH;
+    /** Path to the checkpoint's root (including job id) */
+    public final String CHECKPOINT_BASE_PATH;
+    /** Path to the master election path */
+    public final String MASTER_ELECTION_PATH;
+
+    /**
+     * Get the superstep from a ZooKeeper path
+     *
+     * @param path Path to parse for the superstep
+     * @return Superstep parsed from the path
+     */
+    public static long getSuperstepFromPath(String path) {
+        int foundSuperstepStart = path.indexOf(SUPERSTEP_DIR);
+        if (foundSuperstepStart == -1) {
+            throw new IllegalArgumentException(
+                "getSuperstepFromPath: Cannot find " + SUPERSTEP_DIR +
+                " from " + path);
+        }
+        foundSuperstepStart += SUPERSTEP_DIR.length() + 1;
+        // Check the relative index of the next separator directly so that
+        // a missing separator is not masked by the offset addition
+        int separatorIndex =
+            path.substring(foundSuperstepStart).indexOf("/");
+        if (separatorIndex == -1) {
+            throw new IllegalArgumentException(
+                "getSuperstepFromPath: Cannot find end of superstep from " +
+                path);
+        }
+        int endIndex = foundSuperstepStart + separatorIndex;
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("getSuperstepFromPath: Got path=" + path +
+                      ", start=" + foundSuperstepStart + ", end=" + endIndex);
+        }
+        return Long.parseLong(path.substring(foundSuperstepStart, endIndex));
+    }
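+
+    // Illustrative example (not part of the original source):
+    //   getSuperstepFromPath(".../_applicationAttemptsDir/0" +
+    //                        "/_superstepDir/3/_workerHealthyDir")
+    // skips past "/_superstepDir/", takes everything up to the next "/"
+    // and returns 3.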
+
+    /**
+     * Get the hostname and id from a "healthy" worker path
+     *
+     * @param path Path to parse for the hostname and id
+     * @return Hostname and id part of the path
+     */
+    public static String getHealthyHostnameIdFromPath(String path) {
+        int foundWorkerHealthyStart = path.indexOf(WORKER_HEALTHY_DIR);
+        if (foundWorkerHealthyStart == -1) {
+            throw new IllegalArgumentException(
+                "getHealthyHostnameidFromPath: Couldn't find " +
+                WORKER_HEALTHY_DIR + " from " + path);
+        }
+        foundWorkerHealthyStart += WORKER_HEALTHY_DIR.length();
+        return path.substring(foundWorkerHealthyStart);
+    }
+
+    /**
+     * Generate the base superstep directory path for a given application
+     * attempt
+     *
+     * @param attempt application attempt number
+     * @return directory path based on the attempt
+     */
+    final public String getSuperstepPath(long attempt) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt + SUPERSTEP_DIR;
+    }
+
+    /**
+     * Generate the worker information "healthy" directory path for a
+     * superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getWorkerInfoHealthyPath(long attempt,
+                                                 long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + WORKER_HEALTHY_DIR;
+    }
+
+    /**
+     * Generate the worker information "unhealthy" directory path for a
+     * superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getWorkerInfoUnhealthyPath(long attempt,
+                                                   long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + WORKER_UNHEALTHY_DIR;
+    }
+
+    /**
+     * Generate the worker "finished" directory path for a
+     * superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getWorkerFinishedPath(long attempt, long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + WORKER_FINISHED_DIR;
+    }
+
+    /**
+     * Generate the "partiton assignments" directory path for a superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the a superstep
+     */
+    final public String getPartitionAssignmentsPath(long attempt,
+                                                    long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + PARTITION_ASSIGNMENTS_DIR;
+    }
+
+    /**
+     * Generate the "partition exchange" directory path for a superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getPartitionExchangePath(long attempt,
+                                                 long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + PARTITION_EXCHANGE_DIR;
+    }
+
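+    /**
+     * Generate a worker's own "partition exchange" node path for a
+     * superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @param workerInfo worker to generate the path for
+     * @return node path for the worker in the partition exchange directory
+     */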
+    final public String getPartitionExchangeWorkerPath(long attempt,
+                                                       long superstep,
+                                                       WorkerInfo workerInfo) {
+        return getPartitionExchangePath(attempt, superstep) +
+            "/" + workerInfo.getHostnameId();
+    }
+
+    /**
+     * Generate the merged aggregator directory path for a superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getMergedAggregatorPath(long attempt, long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + MERGED_AGGREGATOR_DIR;
+    }
+
+    /**
+     * Generate the "superstep finished" directory path for a superstep
+     *
+     * @param attempt application attempt number
+     * @param superstep superstep to use
+     * @return directory path based on the superstep
+     */
+    final public String getSuperstepFinishedPath(long attempt, long superstep) {
+        return APPLICATION_ATTEMPTS_PATH + "/" + attempt +
+            SUPERSTEP_DIR + "/" + superstep + SUPERSTEP_FINISHED_NODE;
+    }
+
+    /**
+     * Generate the checkpoint base path for a given superstep
+     *
+     * @param superstep Superstep to use
+     * @return Checkpoint directory path for the superstep
+     */
+    final public String getCheckpointBasePath(long superstep) {
+        return CHECKPOINT_BASE_PATH + "/" + superstep;
+    }
+
+    /** If at the end of a checkpoint file, indicates metadata */
+    public static final String CHECKPOINT_METADATA_POSTFIX = ".metadata";
+
+    /**
+     * If at the end of a checkpoint file, indicates vertices, edges,
+     * messages, etc.
+     */
+    public static final String CHECKPOINT_VERTICES_POSTFIX = ".vertices";
+
+    /**
+     * If at the end of a checkpoint file, indicates metadata and data is valid
+     * for the same filenames without .valid
+     */
+    public static final String CHECKPOINT_VALID_POSTFIX = ".valid";
+
+    /**
+     * If at the end of a checkpoint file, indicates the stitched checkpoint
+     * file prefixes.  A checkpoint is not valid if this file does not exist.
+     */
+    public static final String CHECKPOINT_FINALIZED_POSTFIX = ".finalized";
+
+    /**
+     * Get the checkpoint from a finalized checkpoint path
+     *
+     * @param finalizedPath Path of the finalized checkpoint
+     * @return Superstep referring to a checkpoint of the finalized path
+     */
+    public static long getCheckpoint(Path finalizedPath) {
+        if (!finalizedPath.getName().endsWith(CHECKPOINT_FINALIZED_POSTFIX)) {
+            throw new InvalidParameterException(
+                "getCheckpoint: " + finalizedPath + "Doesn't end in " +
+                CHECKPOINT_FINALIZED_POSTFIX);
+        }
+        String checkpointString =
+            finalizedPath.getName().replace(CHECKPOINT_FINALIZED_POSTFIX, "");
+        return Long.parseLong(checkpointString);
+    }
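+
+    // Illustrative example (not part of the original source): a finalized
+    // checkpoint for superstep 5 is written as
+    // getCheckpointBasePath(5) + CHECKPOINT_FINALIZED_POSTFIX, i.e.
+    // ".../5.finalized", so getCheckpoint() on that path returns 5.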
+
+    /**
+     * Get the ZooKeeperExt instance.
+     *
+     * @return ZooKeeperExt instance.
+     */
+    final public ZooKeeperExt getZkExt() {
+        return zk;
+    }
+
+    @Override
+    final public long getRestartedSuperstep() {
+        return restartedSuperstep;
+    }
+
+    /**
+     * Set the restarted superstep
+     *
+     * @param superstep Superstep to restart from (manually set)
+     */
+    final public void setRestartedSuperstep(long superstep) {
+        if (superstep < INPUT_SUPERSTEP) {
+            throw new IllegalArgumentException(
+                "setRestartedSuperstep: Bad argument " + superstep);
+        }
+        restartedSuperstep = superstep;
+    }
+
+    /**
+     * Should checkpoint on this superstep?  If checkpointing, always
+     * checkpoint the first user superstep.  If restarting, the first
+     * checkpoint is after the frequency has been met.
+     *
+     * @param superstep Decide if checkpointing on this superstep
+     * @return True if this superstep should be checkpointed, false otherwise
+     */
+    final public boolean checkpointFrequencyMet(long superstep) {
+        if (checkpointFrequency == 0) {
+            return false;
+        }
+        long firstCheckpoint = INPUT_SUPERSTEP + 1;
+        if (getRestartedSuperstep() != UNSET_SUPERSTEP) {
+            firstCheckpoint = getRestartedSuperstep() + checkpointFrequency;
+        }
+        if (superstep < firstCheckpoint) {
+            return false;
+        }
+        return ((superstep - firstCheckpoint) % checkpointFrequency) == 0;
+    }
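+
+    // Illustrative schedule (not part of the original source): with
+    // checkpointFrequency == 2 and no restart, firstCheckpoint ==
+    // INPUT_SUPERSTEP + 1 == 0, so supersteps 0, 2, 4, ... meet the
+    // frequency; after a restart from superstep 4, the first checkpoint
+    // is at superstep 4 + 2 == 6.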
+
+    /**
+     * Get the file system
+     *
+     * @return file system
+     */
+    final public FileSystem getFs() {
+        return fs;
+    }
+
+    final public Configuration getConfiguration() {
+        return conf;
+    }
+
+    final public Mapper<?, ?, ?, ?>.Context getContext() {
+        return context;
+    }
+
+    final public String getHostname() {
+        return hostname;
+    }
+
+    final public String getHostnamePartitionId() {
+        return hostnamePartitionId;
+    }
+
+    final public int getTaskPartition() {
+        return taskPartition;
+    }
+
+    final public GraphMapper<I, V, E, M> getGraphMapper() {
+        return graphMapper;
+    }
+
+    final public BspEvent getWorkerHealthRegistrationChangedEvent() {
+        return workerHealthRegistrationChanged;
+    }
+
+    final public BspEvent getInputSplitsAllReadyEvent() {
+        return inputSplitsAllReadyChanged;
+    }
+
+    final public BspEvent getInputSplitsStateChangedEvent() {
+        return inputSplitsStateChanged;
+    }
+
+    final public BspEvent getInputSplitsAllDoneEvent() {
+        return inputSplitsAllDoneChanged;
+    }
+
+    final public BspEvent getInputSplitsDoneStateChangedEvent() {
+        return inputSplitsDoneStateChanged;
+    }
+
+    final public BspEvent getPartitionAssignmentsReadyChangedEvent() {
+        return partitionAssignmentsReadyChanged;
+    }
+
+    final public BspEvent getApplicationAttemptChangedEvent() {
+        return applicationAttemptChanged;
+    }
+
+    final public BspEvent getSuperstepFinishedEvent() {
+        return superstepFinished;
+    }
+
+    final public BspEvent getMasterElectionChildrenChangedEvent() {
+        return masterElectionChildrenChanged;
+    }
+
+    final public BspEvent getCleanedUpChildrenChangedEvent() {
+        return cleanedUpChildrenChanged;
+    }
+
+    /**
+     * Get the master commanded job state as a JSONObject.  Also sets the
+     * watches to see if the master commanded job state changes.
+     *
+     * @return Last job state or null if none
+     */
+    final public JSONObject getJobState() {
+        try {
+            getZkExt().createExt(MASTER_JOB_STATE_PATH,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.info("getJobState: Job state already exists (" +
+                     MASTER_JOB_STATE_PATH + ")");
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+        String jobState = null;
+        try {
+            List<String> childList =
+                getZkExt().getChildrenExt(
+                    MASTER_JOB_STATE_PATH, true, true, true);
+            if (childList.isEmpty()) {
+                return null;
+            }
+            jobState =
+                new String(getZkExt().getData(
+                    childList.get(childList.size() - 1), true, null));
+        } catch (KeeperException.NoNodeException e) {
+            LOG.info("getJobState: Job state path is empty! - " +
+                     MASTER_JOB_STATE_PATH);
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+        if (jobState == null) {
+            // Job state was not readable (see the empty-path case above)
+            return null;
+        }
+        try {
+            return new JSONObject(jobState);
+        } catch (JSONException e) {
+            throw new RuntimeException(
+                "getJobState: Failed to parse job state " + jobState);
+        }
+    }
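+
+    // Illustrative example (not part of the original source): the job state
+    // znode written by the master (see BspServiceMaster.setJobState()) holds
+    // a JSON object keyed by the JSONOBJ_* constants above, e.g.
+    //   {"_stateKey":"FAILED","_applicationAttemptKey":-1,"_superstepKey":-1}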
+
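+    /**
+     * Constructor.
+     *
+     * @param serverPortList ZooKeeper server port list
+     * @param sessionMsecTimeout ZooKeeper session timeout in msecs
+     * @param context Mapper context
+     * @param graphMapper Mapper that will do the graph computation
+     */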
+    public BspService(String serverPortList,
+                      int sessionMsecTimeout,
+                      Mapper<?, ?, ?, ?>.Context context,
+                      GraphMapper<I, V, E, M> graphMapper) {
+        registerBspEvent(connectedEvent);
+        registerBspEvent(workerHealthRegistrationChanged);
+        registerBspEvent(inputSplitsAllReadyChanged);
+        registerBspEvent(inputSplitsStateChanged);
+        registerBspEvent(partitionAssignmentsReadyChanged);
+        registerBspEvent(applicationAttemptChanged);
+        registerBspEvent(superstepFinished);
+        registerBspEvent(masterElectionChildrenChanged);
+        registerBspEvent(cleanedUpChildrenChanged);
+
+        this.context = context;
+        this.graphMapper = graphMapper;
+        this.conf = context.getConfiguration();
+        this.jobId = conf.get("mapred.job.id", "Unknown Job");
+        this.taskPartition = conf.getInt("mapred.task.partition", -1);
+        this.restartedSuperstep = conf.getLong(GiraphJob.RESTART_SUPERSTEP,
+                                               UNSET_SUPERSTEP);
+        this.cachedSuperstep = restartedSuperstep;
+        if ((restartedSuperstep != UNSET_SUPERSTEP) &&
+                (restartedSuperstep < 0)) {
+            throw new IllegalArgumentException(
+                "BspService: Invalid superstep to restart - " +
+                restartedSuperstep);
+        }
+        try {
+            this.hostname = InetAddress.getLocalHost().getHostName();
+        } catch (UnknownHostException e) {
+            throw new RuntimeException(e);
+        }
+        this.hostnamePartitionId = hostname + "_" + getTaskPartition();
+        this.graphPartitionerFactory =
+            BspUtils.<I, V, E, M>createGraphPartitioner(conf);
+
+        this.checkpointFrequency =
+            conf.getInt(GiraphJob.CHECKPOINT_FREQUENCY,
+                          GiraphJob.CHECKPOINT_FREQUENCY_DEFAULT);
+
+        BASE_PATH = BASE_DIR + "/" + jobId;
+        MASTER_JOB_STATE_PATH = BASE_PATH + MASTER_JOB_STATE_NODE;
+        INPUT_SPLIT_PATH = BASE_PATH + INPUT_SPLIT_DIR;
+        INPUT_SPLITS_ALL_READY_PATH = BASE_PATH + INPUT_SPLITS_ALL_READY_NODE;
+        INPUT_SPLIT_DONE_PATH = BASE_PATH + INPUT_SPLIT_DONE_DIR;
+        INPUT_SPLITS_ALL_DONE_PATH = BASE_PATH + INPUT_SPLITS_ALL_DONE_NODE;
+        APPLICATION_ATTEMPTS_PATH = BASE_PATH + APPLICATION_ATTEMPTS_DIR;
+        CLEANED_UP_PATH = BASE_PATH + CLEANED_UP_DIR;
+        CHECKPOINT_BASE_PATH =
+            getConfiguration().get(
+                GiraphJob.CHECKPOINT_DIRECTORY,
+                GiraphJob.CHECKPOINT_DIRECTORY_DEFAULT + "/" + getJobId());
+        MASTER_ELECTION_PATH = BASE_PATH + MASTER_ELECTION_DIR;
+        if (LOG.isInfoEnabled()) {
+            LOG.info("BspService: Connecting to ZooKeeper with job " + jobId +
+                     ", " + getTaskPartition() + " on " + serverPortList);
+        }
+        try {
+            this.zk = new ZooKeeperExt(serverPortList, sessionMsecTimeout, this);
+            connectedEvent.waitForever();
+            this.fs = FileSystem.get(getConfiguration());
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Get the job id
+     *
+     * @return job id
+     */
+    final public String getJobId() {
+        return jobId;
+    }
+
+    /**
+     * Get the latest application attempt and cache it.
+     *
+     * @return the latest application attempt
+     */
+    final public long getApplicationAttempt() {
+        if (cachedApplicationAttempt != UNSET_APPLICATION_ATTEMPT) {
+            return cachedApplicationAttempt;
+        }
+        try {
+            getZkExt().createExt(APPLICATION_ATTEMPTS_PATH,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.info("getApplicationAttempt: Node " +
+                     APPLICATION_ATTEMPTS_PATH + " already exists!");
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+        try {
+            List<String> attemptList =
+                getZkExt().getChildrenExt(
+                    APPLICATION_ATTEMPTS_PATH, true, false, false);
+            if (attemptList.isEmpty()) {
+                cachedApplicationAttempt = 0;
+            } else {
+                cachedApplicationAttempt =
+                    Long.parseLong(Collections.max(attemptList));
+            }
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+
+        return cachedApplicationAttempt;
+    }
+
+    /**
+     * Get the latest superstep and cache it.
+     *
+     * @return the latest superstep
+     */
+    final public long getSuperstep() {
+        if (cachedSuperstep != UNSET_SUPERSTEP) {
+            return cachedSuperstep;
+        }
+        String superstepPath = getSuperstepPath(getApplicationAttempt());
+        try {
+            getZkExt().createExt(superstepPath,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getApplicationAttempt: Node " +
+                         APPLICATION_ATTEMPTS_PATH + " already exists!");
+            }
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getSuperstep: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getSuperstep: InterruptedException", e);
+        }
+
+        List<String> superstepList;
+        try {
+            superstepList =
+                getZkExt().getChildrenExt(superstepPath, true, false, false);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getSuperstep: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getSuperstep: InterruptedException", e);
+        }
+        if (superstepList.isEmpty()) {
+            cachedSuperstep = INPUT_SUPERSTEP;
+        } else {
+            cachedSuperstep =
+                Long.parseLong(Collections.max(superstepList));
+        }
+
+        return cachedSuperstep;
+    }
+
+    /**
+     * Increment the cached superstep.  The cached superstep must already
+     * have been set.
+     */
+    final public void incrCachedSuperstep() {
+        if (cachedSuperstep == UNSET_SUPERSTEP) {
+            throw new IllegalStateException(
+                "incrSuperstep: Invalid unset cached superstep " +
+                UNSET_SUPERSTEP);
+        }
+        ++cachedSuperstep;
+    }
+
+    /**
+     * Set the cached superstep (should only be used for loading checkpoints
+     * or recovering from failure).
+     *
+     * @param superstep will be used as the next superstep iteration
+     */
+    final public void setCachedSuperstep(long superstep) {
+        cachedSuperstep = superstep;
+    }
+
+    /**
+     * Set the cached application attempt (should only be used for restart from
+     * failure by the master)
+     *
+     * @param applicationAttempt Will denote the new application attempt
+     */
+    final public void setApplicationAttempt(long applicationAttempt) {
+        cachedApplicationAttempt = applicationAttempt;
+        String superstepPath = getSuperstepPath(cachedApplicationAttempt);
+        try {
+            getZkExt().createExt(superstepPath,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            throw new IllegalArgumentException(
+                "setApplicationAttempt: Attempt already exists! - " +
+                superstepPath, e);
+        } catch (KeeperException e) {
+            throw new RuntimeException(
+                "setApplicationAttempt: KeeperException - " +
+                superstepPath, e);
+        } catch (InterruptedException e) {
+            throw new RuntimeException(
+                "setApplicationAttempt: InterruptedException - " +
+                superstepPath, e);
+        }
+    }
+
+    /**
+     * Register an aggregator with name.
+     *
+     * @param name Name of the aggregator
+     * @param aggregatorClass Class of the aggregator
+     * @return Aggregator
+     * @throws IllegalAccessException
+     * @throws InstantiationException
+     */
+    public final <A extends Writable> Aggregator<A> registerAggregator(
+            String name,
+            Class<? extends Aggregator<A>> aggregatorClass)
+            throws InstantiationException, IllegalAccessException {
+        if (aggregatorMap.get(name) != null) {
+            return null;
+        }
+        Aggregator<A> aggregator =
+            (Aggregator<A>) aggregatorClass.newInstance();
+        @SuppressWarnings("unchecked")
+        Aggregator<Writable> writableAggregator =
+            (Aggregator<Writable>) aggregator;
+        aggregatorMap.put(name, writableAggregator);
+        if (LOG.isInfoEnabled()) {
+            LOG.info("registerAggregator: registered " + name);
+        }
+        return aggregator;
+    }
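+
+    /*
+     * Illustrative usage sketch (not part of the original source;
+     * MaxLongAggregator stands in for any Aggregator<LongWritable>
+     * implementation):
+     *
+     *   Aggregator<LongWritable> max =
+     *       registerAggregator("max", MaxLongAggregator.class);
+     *   if (max == null) {
+     *       // an aggregator named "max" was already registered
+     *   }
+     */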
+
+    /**
+     * Get aggregator by name.
+     *
+     * @param name Name of the aggregator
+     * @return Aggregator (null when not registered)
+     */
+    public final Aggregator<? extends Writable> getAggregator(String name) {
+        return aggregatorMap.get(name);
+    }
+
+    /**
+     * Get the aggregator map.
+     *
+     * @return Map of aggregator names to aggregators
+     */
+    public Map<String, Aggregator<Writable>> getAggregatorMap() {
+        return aggregatorMap;
+    }
+
+    /**
+     * Register a BspEvent.  Ensure that it will be signaled
+     * on catastrophic failure so that threads waiting on an event signal
+     * will be unblocked.
+     *
+     * @param event Event to be registered
+     */
+    public void registerBspEvent(BspEvent event) {
+        registeredBspEvents.add(event);
+    }
+
+    /**
+     * Subclasses can use this to instantiate their respective partitioners
+     *
+     * @return Instantiated graph partitioner factory
+     */
+    protected GraphPartitionerFactory<I, V, E, M> getGraphPartitionerFactory() {
+        return graphPartitionerFactory;
+    }
+
+    /**
+     * Derived classes that want additional ZooKeeper events to take action
+     * should override this.
+     *
+     * @param event Event that occurred
+     * @return true if the event was processed here, false otherwise
+     */
+    protected boolean processEvent(WatchedEvent event) {
+        return false;
+    }
+
+    @Override
+    final public void process(WatchedEvent event) {
+        // 1. Process all shared events
+        // 2. Process specific derived class events
+
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("process: Got a new event, path = " + event.getPath() +
+                      ", type = " + event.getType() + ", state = " +
+                      event.getState());
+        }
+
+        if ((event.getPath() == null) && (event.getType() == EventType.None)) {
+            if (event.getState() == KeeperState.Disconnected) {
+                // No way to recover from a disconnect event, signal all BspEvents
+                for (BspEvent bspEvent : registeredBspEvents) {
+                    bspEvent.signal();
+                }
+                throw new RuntimeException(
+                    "process: Disconnected from ZooKeeper, cannot recover - " +
+                    event);
+            } else if (event.getState() == KeeperState.SyncConnected) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("process: Asynchronous connection complete.");
+                }
+                connectedEvent.signal();
+            } else {
+                LOG.warn("process: Got unknown null path event " + event);
+            }
+            return;
+        }
+
+        boolean eventProcessed = false;
+        if (event.getPath().startsWith(MASTER_JOB_STATE_PATH)) {
+            // This will cause all becomeMaster() MasterThreads to notice the
+            // change in job state and quit trying to become the master.
+            masterElectionChildrenChanged.signal();
+            eventProcessed = true;
+        } else if ((event.getPath().contains(WORKER_HEALTHY_DIR) ||
+                event.getPath().contains(WORKER_UNHEALTHY_DIR)) &&
+                (event.getType() == EventType.NodeChildrenChanged)) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("process: workerHealthRegistrationChanged " +
+                          "(worker health reported - healthy/unhealthy )");
+            }
+            workerHealthRegistrationChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().equals(INPUT_SPLITS_ALL_READY_PATH) &&
+                (event.getType() == EventType.NodeCreated)) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: inputSplitsReadyChanged " +
+                         "(input splits ready)");
+            }
+            inputSplitsAllReadyChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().endsWith(INPUT_SPLIT_RESERVED_NODE) &&
+                (event.getType() == EventType.NodeCreated)) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("process: inputSplitsStateChanged "+
+                          "(made a reservation)");
+            }
+            inputSplitsStateChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().endsWith(INPUT_SPLIT_RESERVED_NODE) &&
+                (event.getType() == EventType.NodeDeleted)) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: inputSplitsStateChanged "+
+                         "(lost a reservation)");
+            }
+            inputSplitsStateChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().endsWith(INPUT_SPLIT_FINISHED_NODE) &&
+                (event.getType() == EventType.NodeCreated)) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("process: inputSplitsStateChanged " +
+                          "(finished inputsplit)");
+            }
+            inputSplitsStateChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().endsWith(INPUT_SPLIT_DONE_DIR) &&
+                (event.getType() == EventType.NodeChildrenChanged)) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("process: inputSplitsDoneStateChanged " +
+                          "(worker finished sending)");
+            }
+            inputSplitsDoneStateChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().equals(INPUT_SPLITS_ALL_DONE_PATH) &&
+                (event.getType() == EventType.NodeCreated)) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: inputSplitsAllDoneChanged " +
+                         "(all vertices sent from input splits)");
+            }
+            inputSplitsAllDoneChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().contains(PARTITION_ASSIGNMENTS_DIR) &&
+                event.getType() == EventType.NodeCreated) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: partitionAssignmentsReadyChanged " +
+                         "(partitions are assigned)");
+            }
+            partitionAssignmentsReadyChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().contains(SUPERSTEP_FINISHED_NODE) &&
+                event.getType() == EventType.NodeCreated) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: superstepFinished signaled");
+            }
+            superstepFinished.signal();
+            eventProcessed = true;
+        } else if (event.getPath().endsWith(APPLICATION_ATTEMPTS_PATH) &&
+                event.getType() == EventType.NodeChildrenChanged) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: applicationAttemptChanged signaled");
+            }
+            applicationAttemptChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().contains(MASTER_ELECTION_DIR) &&
+                event.getType() == EventType.NodeChildrenChanged) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: masterElectionChildrenChanged signaled");
+            }
+            masterElectionChildrenChanged.signal();
+            eventProcessed = true;
+        } else if (event.getPath().equals(CLEANED_UP_PATH) &&
+                event.getType() == EventType.NodeChildrenChanged) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("process: cleanedUpChildrenChanged signaled");
+            }
+            cleanedUpChildrenChanged.signal();
+            eventProcessed = true;
+        }
+
+        if (!processEvent(event) && !eventProcessed) {
+            LOG.warn("process: Unknown and unprocessed event (path=" +
+                     event.getPath() + ", type=" + event.getType() +
+                     ", state=" + event.getState() + ")");
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/BspServiceMaster.java b/src/main/java/org/apache/giraph/graph/BspServiceMaster.java
new file mode 100644
index 0000000..c580bf3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BspServiceMaster.java
@@ -0,0 +1,1737 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import net.iharder.Base64;
+import org.apache.giraph.bsp.ApplicationState;
+import org.apache.giraph.bsp.BspInputFormat;
+import org.apache.giraph.bsp.CentralizedService;
+import org.apache.giraph.bsp.CentralizedServiceMaster;
+import org.apache.giraph.bsp.SuperstepState;
+import org.apache.giraph.graph.GraphMapper.MapFunctions;
+import org.apache.giraph.zk.BspEvent;
+import org.apache.giraph.zk.PredicateLock;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.RunningJob;
+import org.apache.hadoop.mapreduce.Counter;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.log4j.Logger;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher.Event.EventType;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.giraph.graph.partition.MasterGraphPartitioner;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.giraph.graph.partition.PartitionUtils;
+import org.apache.giraph.utils.WritableUtils;
+
+/**
+ * ZooKeeper-based implementation of {@link CentralizedServiceMaster}.
+ */
+@SuppressWarnings("rawtypes")
+public class BspServiceMaster<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable, M extends Writable>
+        extends BspService<I, V, E, M>
+        implements CentralizedServiceMaster<I, V, E, M> {
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(BspServiceMaster.class);
+    /** Superstep counter */
+    private Counter superstepCounter = null;
+    /** Vertex counter */
+    private Counter vertexCounter = null;
+    /** Finished vertex counter */
+    private Counter finishedVertexCounter = null;
+    /** Edge counter */
+    private Counter edgeCounter = null;
+    /** Sent messages counter */
+    private Counter sentMessagesCounter = null;
+    /** Workers on this superstep */
+    private Counter currentWorkersCounter = null;
+    /** Current master task partition */
+    private Counter currentMasterTaskPartitionCounter = null;
+    /** Last checkpointed superstep */
+    private Counter lastCheckpointedSuperstepCounter = null;
+    /** Am I the master? */
+    private boolean isMaster = false;
+    /** Max number of workers */
+    private final int maxWorkers;
+    /** Min number of workers */
+    private final int minWorkers;
+    /** Min % responded workers */
+    private final float minPercentResponded;
+    /** Poll period in msecs */
+    private final int msecsPollPeriod;
+    /** Max number of poll attempts */
+    private final int maxPollAttempts;
+    /** Min number of long tails before printing */
+    private final int partitionLongTailMinPrint;
+    /** Last finalized checkpoint */
+    private long lastCheckpointedSuperstep = -1;
+    /** State of the superstep changed */
+    private final BspEvent superstepStateChanged =
+        new PredicateLock();
+    /** Master graph partitioner */
+    private final MasterGraphPartitioner<I, V, E, M> masterGraphPartitioner;
+    /** All the partition stats from the last superstep */
+    private final List<PartitionStats> allPartitionStatsList =
+        new ArrayList<PartitionStats>();
+    /** Counter group name for the Giraph statistics */
+    public static final String GIRAPH_STATS_COUNTER_GROUP_NAME =
+        "Giraph Stats";
+    /** Aggregator writer */
+    public AggregatorWriter aggregatorWriter;
+
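+    /**
+     * Constructor for setting up the master.
+     *
+     * @param serverPortList ZooKeeper server port list
+     * @param sessionMsecTimeout ZooKeeper session timeout in msecs
+     * @param context Mapper context
+     * @param graphMapper Mapper that will do the graph computation
+     */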
+    public BspServiceMaster(
+            String serverPortList,
+            int sessionMsecTimeout,
+            Mapper<?, ?, ?, ?>.Context context,
+            GraphMapper<I, V, E, M> graphMapper) {
+        super(serverPortList, sessionMsecTimeout, context, graphMapper);
+        registerBspEvent(superstepStateChanged);
+
+        maxWorkers =
+            getConfiguration().getInt(GiraphJob.MAX_WORKERS, -1);
+        minWorkers =
+            getConfiguration().getInt(GiraphJob.MIN_WORKERS, -1);
+        minPercentResponded =
+            getConfiguration().getFloat(GiraphJob.MIN_PERCENT_RESPONDED,
+                                        100.0f);
+        msecsPollPeriod =
+            getConfiguration().getInt(GiraphJob.POLL_MSECS,
+                                      GiraphJob.POLL_MSECS_DEFAULT);
+        maxPollAttempts =
+            getConfiguration().getInt(GiraphJob.POLL_ATTEMPTS,
+                                      GiraphJob.POLL_ATTEMPTS_DEFAULT);
+        partitionLongTailMinPrint = getConfiguration().getInt(
+            GiraphJob.PARTITION_LONG_TAIL_MIN_PRINT,
+            GiraphJob.PARTITION_LONG_TAIL_MIN_PRINT_DEFAULT);
+        masterGraphPartitioner =
+            getGraphPartitionerFactory().createMasterGraphPartitioner();
+    }
+
+    @Override
+    public void setJobState(ApplicationState state,
+                            long applicationAttempt,
+                            long desiredSuperstep) {
+        JSONObject jobState = new JSONObject();
+        try {
+            jobState.put(JSONOBJ_STATE_KEY, state.toString());
+            jobState.put(JSONOBJ_APPLICATION_ATTEMPT_KEY, applicationAttempt);
+            jobState.put(JSONOBJ_SUPERSTEP_KEY, desiredSuperstep);
+        } catch (JSONException e) {
+            throw new RuntimeException("setJobState: Coudn't put " +
+                                       state.toString());
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("setJobState: " + jobState.toString() + " on superstep " +
+                     getSuperstep());
+        }
+        try {
+            getZkExt().createExt(MASTER_JOB_STATE_PATH + "/jobState",
+                                 jobState.toString().getBytes(),
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT_SEQUENTIAL,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            throw new IllegalStateException(
+                "setJobState: Imposible that " +
+                MASTER_JOB_STATE_PATH + " already exists!", e);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "setJobState: Unknown KeeperException for " +
+                MASTER_JOB_STATE_PATH, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "setJobState: Unknown InterruptedException for " +
+                MASTER_JOB_STATE_PATH, e);
+        }
+
+        if (state == ApplicationState.FAILED) {
+            failJob();
+        }
+    }
+
+    /**
+     * Master uses this to calculate the {@link VertexInputFormat}
+     * input splits that will be written to ZooKeeper.
+     *
+     * @param numWorkers Number of available workers
+     * @return List of input splits (possibly a sample)
+     */
+    private List<InputSplit> generateInputSplits(int numWorkers) {
+        VertexInputFormat<I, V, E, M> vertexInputFormat =
+            BspUtils.<I, V, E, M>createVertexInputFormat(getConfiguration());
+        List<InputSplit> splits;
+        try {
+            splits = vertexInputFormat.getSplits(getContext(), numWorkers);
+            float samplePercent =
+                getConfiguration().getFloat(
+                    GiraphJob.INPUT_SPLIT_SAMPLE_PERCENT,
+                    GiraphJob.INPUT_SPLIT_SAMPLE_PERCENT_DEFAULT);
+            if (samplePercent != GiraphJob.INPUT_SPLIT_SAMPLE_PERCENT_DEFAULT) {
+                int lastIndex = (int) (samplePercent * splits.size() / 100f);
+                List<InputSplit> sampleSplits = splits.subList(0, lastIndex);
+                LOG.warn("generateInputSplits: Using sampling - Processing " +
+                         "only " + sampleSplits.size() + " instead of " +
+                        splits.size() + " expected splits.");
+                return sampleSplits;
+            } else {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("generateInputSplits: Got " + splits.size() +
+                            " input splits for " + numWorkers + " workers");
+                }
+                return splits;
+            }
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "generateInputSplits: Got IOException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "generateInputSplits: Got InterruptedException", e);
+        }
+    }
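+
+    // Illustrative sampling example (not part of the original source): with
+    // INPUT_SPLIT_SAMPLE_PERCENT set to 20 and 50 generated splits,
+    // lastIndex == (int) (20f * 50 / 100f) == 10, so only the first 10
+    // splits are returned.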
+
+    /**
+     * When there is no salvaging this job, fail it.
+     */
+    private void failJob() {
+        LOG.fatal("failJob: Killing job " + getJobId());
+        try {
+            @SuppressWarnings("deprecation")
+            org.apache.hadoop.mapred.JobClient jobClient =
+                new org.apache.hadoop.mapred.JobClient(
+                    (org.apache.hadoop.mapred.JobConf)
+                    getConfiguration());
+            @SuppressWarnings("deprecation")
+            org.apache.hadoop.mapred.JobID jobId =
+                org.apache.hadoop.mapred.JobID.forName(getJobId());
+            RunningJob job = jobClient.getJob(jobId);
+            job.killJob();
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Parse the {@link WorkerInfo} objects from a ZooKeeper path
+     * (and children).
+     *
+     * @param workerInfosPath Path where all the workers are children
+     * @param watch Watch or not?
+     * @return List of workers in that path
+     */
+    private List<WorkerInfo> getWorkerInfosFromPath(String workerInfosPath,
+                                                    boolean watch) {
+        List<WorkerInfo> workerInfoList = new ArrayList<WorkerInfo>();
+        List<String> workerInfoPathList;
+        try {
+            workerInfoPathList =
+                getZkExt().getChildrenExt(workerInfosPath, watch, false, true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getWorkerInfosFromPath: Got KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getWorkerInfosFromPath: Got InterruptedException", e);
+        }
+        for (String workerInfoPath : workerInfoPathList) {
+            WorkerInfo workerInfo = new WorkerInfo();
+            WritableUtils.readFieldsFromZnode(
+                getZkExt(), workerInfoPath, true, null, workerInfo);
+            workerInfoList.add(workerInfo);
+        }
+        return workerInfoList;
+    }
+
+    /**
+     * Get the healthy and unhealthy {@link WorkerInfo} objects for
+     * a superstep
+     *
+     * @param superstep superstep to check
+     * @param healthyWorkerInfoList filled in with current data
+     * @param unhealthyWorkerInfoList filled in with current data
+     */
+    private void getAllWorkerInfos(
+            long superstep,
+            List<WorkerInfo> healthyWorkerInfoList,
+            List<WorkerInfo> unhealthyWorkerInfoList) {
+        String healthyWorkerInfoPath =
+            getWorkerInfoHealthyPath(getApplicationAttempt(), superstep);
+        String unhealthyWorkerInfoPath =
+            getWorkerInfoUnhealthyPath(getApplicationAttempt(), superstep);
+
+        try {
+            getZkExt().createOnceExt(healthyWorkerInfoPath,
+                                     null,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getAllWorkerInfos: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getAllWorkerInfos: InterruptedException", e);
+        }
+
+        try {
+            getZkExt().createOnceExt(unhealthyWorkerInfoPath,
+                                     null,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getAllWorkerInfos: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getAllWorkerInfos: InterruptedException", e);
+        }
+
+        List<WorkerInfo> currentHealthyWorkerInfoList =
+            getWorkerInfosFromPath(healthyWorkerInfoPath, true);
+        List<WorkerInfo> currentUnhealthyWorkerInfoList =
+            getWorkerInfosFromPath(unhealthyWorkerInfoPath, false);
+
+        healthyWorkerInfoList.clear();
+        if (currentHealthyWorkerInfoList != null) {
+            for (WorkerInfo healthyWorkerInfo :
+                    currentHealthyWorkerInfoList) {
+                healthyWorkerInfoList.add(healthyWorkerInfo);
+            }
+        }
+
+        unhealthyWorkerInfoList.clear();
+        if (currentUnhealthyWorkerInfoList != null) {
+            for (WorkerInfo unhealthyWorkerInfo :
+                    currentUnhealthyWorkerInfoList) {
+                unhealthyWorkerInfoList.add(unhealthyWorkerInfo);
+            }
+        }
+    }
+
+    /**
+     * Check all the {@link WorkerInfo} objects to ensure that a minimum
+     * number of good workers exists out of the total that have reported.
+     *
+     * @return List of healthy workers such that the minimum has been
+     *         met, otherwise null
+     */
+    private List<WorkerInfo> checkWorkers() {
+        boolean failJob = true;
+        int pollAttempt = 0;
+        List<WorkerInfo> healthyWorkerInfoList = new ArrayList<WorkerInfo>();
+        List<WorkerInfo> unhealthyWorkerInfoList = new ArrayList<WorkerInfo>();
+        int totalResponses = -1;
+        while (pollAttempt < maxPollAttempts) {
+            getAllWorkerInfos(
+                getSuperstep(), healthyWorkerInfoList, unhealthyWorkerInfoList);
+            totalResponses = healthyWorkerInfoList.size() +
+                unhealthyWorkerInfoList.size();
+            if ((totalResponses * 100.0f / maxWorkers) >=
+                    minPercentResponded) {
+                failJob = false;
+                break;
+            }
+            getContext().setStatus(getGraphMapper().getMapFunctions() + " " +
+                                   "checkWorkers: Only found " +
+                                   totalResponses +
+                                   " responses of " + maxWorkers +
+                                   " needed to start superstep " +
+                                   getSuperstep());
+            if (getWorkerHealthRegistrationChangedEvent().waitMsecs(
+                    msecsPollPeriod)) {
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("checkWorkers: Got event that health " +
+                              "registration changed, not using poll attempt");
+                }
+                getWorkerHealthRegistrationChangedEvent().reset();
+                continue;
+            }
+            if (LOG.isInfoEnabled()) {
+                LOG.info("checkWorkers: Only found " + totalResponses +
+                         " responses of " + maxWorkers +
+                         " needed to start superstep " +
+                         getSuperstep() + ".  Sleeping for " +
+                         msecsPollPeriod + " msecs and used " + pollAttempt +
+                         " of " + maxPollAttempts + " attempts.");
+                // Find the missing workers if there are only a few
+                if ((maxWorkers - totalResponses) <=
+                        partitionLongTailMinPrint) {
+                    Set<Integer> partitionSet = new TreeSet<Integer>();
+                    for (WorkerInfo workerInfo : healthyWorkerInfoList) {
+                        partitionSet.add(workerInfo.getPartitionId());
+                    }
+                    for (WorkerInfo workerInfo : unhealthyWorkerInfoList) {
+                        partitionSet.add(workerInfo.getPartitionId());
+                    }
+                    for (int i = 1; i <= maxWorkers; ++i) {
+                        if (partitionSet.contains(i)) {
+                            continue;
+                        } else if (i == getTaskPartition()) {
+                            continue;
+                        } else {
+                            LOG.info("checkWorkers: No response from "+
+                                     "partition " + i + " (could be master)");
+                        }
+                    }
+                }
+            }
+            ++pollAttempt;
+        }
+        if (failJob) {
+            LOG.error("checkWorkers: Did not receive enough processes in " +
+                      "time (only " + totalResponses + " of " +
+                      minWorkers + " required).  This occurs if you do not " +
+                      "have enough map tasks available simultaneously on " +
+                      "your Hadoop instance to fulfill the number of " +
+                      "requested workers.");
+            return null;
+        }
+
+        if (healthyWorkerInfoList.size() < minWorkers) {
+            LOG.error("checkWorkers: Only " + healthyWorkerInfoList.size() +
+                      " available when " + minWorkers + " are required.");
+            return null;
+        }
+
+        getContext().setStatus(getGraphMapper().getMapFunctions() + " " +
+            "checkWorkers: Done - Found " + totalResponses +
+            " responses of " + maxWorkers + " needed to start superstep " +
+            getSuperstep());
+
+        return healthyWorkerInfoList;
+    }
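+
+    // Illustrative example (not part of the original source): with
+    // maxWorkers == 10 and minPercentResponded == 100.0f, polling continues
+    // until all 10 workers have registered (healthy or unhealthy); at
+    // 80.0f, 8 responses suffice, but at least minWorkers of them must be
+    // healthy for the superstep to start.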
+
+    @Override
+    public int createInputSplits() {
+        // Only the 'master' should be doing this.  Wait until the number of
+        // processes that have reported health exceeds the minimum percentage.
+        // If the minimum percentage is not met, fail the job.  Otherwise
+        // generate the input splits
+        try {
+            if (getZkExt().exists(INPUT_SPLIT_PATH, false) != null) {
+                LOG.info(INPUT_SPLIT_PATH +
+                         " already exists, no need to create");
+                return Integer.parseInt(
+                    new String(
+                        getZkExt().getData(INPUT_SPLIT_PATH, false, null)));
+            }
+        } catch (KeeperException.NoNodeException e) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("createInputSplits: Need to create the " +
+                         "input splits at " + INPUT_SPLIT_PATH);
+            }
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "createInputSplits: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "createInputSplits: IllegalStateException", e);
+        }
+
+        // When creating znodes, in case the master has already run, resume
+        // where it left off.
+        List<WorkerInfo> healthyWorkerInfoList = checkWorkers();
+        if (healthyWorkerInfoList == null) {
+            setJobState(ApplicationState.FAILED, -1, -1);
+            return -1;
+        }
+
+        // Note that the input splits may only be a sample if
+        // INPUT_SPLIT_SAMPLE_PERCENT is set to something other than 100
+        List<InputSplit> splitList =
+            generateInputSplits(healthyWorkerInfoList.size());
+        if (healthyWorkerInfoList.size() > splitList.size()) {
+            LOG.warn("createInputSplits: Number of inputSplits="
+                     + splitList.size() + " < " +
+                     healthyWorkerInfoList.size() +
+                     "=number of healthy processes, " +
+                     "some workers will be not used");
+        }
+        String inputSplitPath = null;
+        for (int i = 0; i< splitList.size(); ++i) {
+            try {
+                ByteArrayOutputStream byteArrayOutputStream =
+                    new ByteArrayOutputStream();
+                DataOutput outputStream =
+                    new DataOutputStream(byteArrayOutputStream);
+                InputSplit inputSplit = splitList.get(i);
+                Text.writeString(outputStream,
+                                 inputSplit.getClass().getName());
+                ((Writable) inputSplit).write(outputStream);
+                inputSplitPath = INPUT_SPLIT_PATH + "/" + i;
+                getZkExt().createExt(inputSplitPath,
+                                     byteArrayOutputStream.toByteArray(),
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("createInputSplits: Created input split " +
+                              "with index " + i + " serialized as " +
+                              byteArrayOutputStream.toString());
+                }
+            } catch (KeeperException.NodeExistsException e) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("createInputSplits: Node " +
+                             inputSplitPath + " already exists.");
+                }
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "createInputSplits: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "createInputSplits: IllegalStateException", e);
+            } catch (IOException e) {
+                throw new IllegalStateException(
+                    "createInputSplits: IOException", e);
+            }
+        }
+
+        // Let workers know they can start trying to load the input splits
+        try {
+            getZkExt().create(INPUT_SPLITS_ALL_READY_PATH,
+                        null,
+                        Ids.OPEN_ACL_UNSAFE,
+                        CreateMode.PERSISTENT);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.info("createInputSplits: Node " +
+                     INPUT_SPLITS_ALL_READY_PATH + " already exists.");
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "createInputSplits: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "createInputSplits: IllegalStateException", e);
+        }
+
+        return splitList.size();
+    }
+
+    /**
+     * Read the finalized checkpoint file and associated metadata files for the
+     * checkpoint.  Modifies the {@link PartitionOwner} objects to get the
+     * checkpoint prefixes.  It is an optimization to prevent all workers from
+     * searching all the files.  Also reads in the aggregator data from the
+     * finalized checkpoint file and sets it.
+     *
+     * @param superstep Checkpoint set to examine.
+     * @param partitionOwners Partition owners to modify with checkpoint
+     *        prefixes
+     * @throws IOException
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    private void prepareCheckpointRestart(
+            long superstep,
+            Collection<PartitionOwner> partitionOwners)
+            throws IOException, KeeperException, InterruptedException {
+        FileSystem fs = getFs();
+        List<Path> validMetadataPathList = new ArrayList<Path>();
+        String finalizedCheckpointPath =
+            getCheckpointBasePath(superstep) + CHECKPOINT_FINALIZED_POSTFIX;
+        DataInputStream finalizedStream =
+            fs.open(new Path(finalizedCheckpointPath));
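+        // The finalized file layout mirrors what finalizeCheckpoint() writes:
+        // <number of prefix files><file prefixes...>
+        // <aggregator data length><aggregator data bytes>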
+        int prefixFileCount = finalizedStream.readInt();
+        for (int i = 0; i < prefixFileCount; ++i) {
+            String metadataFilePath =
+                finalizedStream.readUTF() + CHECKPOINT_METADATA_POSTFIX;
+            validMetadataPathList.add(new Path(metadataFilePath));
+        }
+
+        // Set the merged aggregator data if it exists.
+        int aggregatorDataSize = finalizedStream.readInt();
+        if (aggregatorDataSize > 0) {
+            byte [] aggregatorZkData = new byte[aggregatorDataSize];
+            int actualDataRead =
+                finalizedStream.read(aggregatorZkData, 0, aggregatorDataSize);
+            if (actualDataRead != aggregatorDataSize) {
+                throw new RuntimeException(
+                    "prepareCheckpointRestart: Only read " + actualDataRead +
+                    " of " + aggregatorDataSize + " aggregator bytes from " +
+                    finalizedCheckpointPath);
+            }
+            String mergedAggregatorPath =
+                getMergedAggregatorPath(getApplicationAttempt(), superstep - 1);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("prepareCheckpointRestart: Reloading merged " +
+                         "aggregator " + "data '" +
+                         Arrays.toString(aggregatorZkData) +
+                         "' to previous checkpoint in path " +
+                         mergedAggregatorPath);
+            }
+            if (getZkExt().exists(mergedAggregatorPath, false) == null) {
+                getZkExt().createExt(mergedAggregatorPath,
+                                     aggregatorZkData,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+            }
+            else {
+                getZkExt().setData(mergedAggregatorPath, aggregatorZkData, -1);
+            }
+        }
+        finalizedStream.close();
+
+        Map<Integer, PartitionOwner> idOwnerMap =
+            new HashMap<Integer, PartitionOwner>();
+        for (PartitionOwner partitionOwner : partitionOwners) {
+            if (idOwnerMap.put(partitionOwner.getPartitionId(),
+                               partitionOwner) != null) {
+                throw new IllegalStateException(
+                    "prepareCheckpointRestart: Duplicate partition " +
+                    partitionOwner);
+            }
+        }
+        // Reading the metadata files.  Simply assign each partition owner
+        // the correct file prefix based on the partition id.
+        for (Path metadataPath : validMetadataPathList) {
+            String checkpointFilePrefix = metadataPath.toString();
+            checkpointFilePrefix =
+                checkpointFilePrefix.substring(
+                0,
+                checkpointFilePrefix.length() -
+                CHECKPOINT_METADATA_POSTFIX.length());
+            DataInputStream metadataStream = fs.open(metadataPath);
+            long partitions = metadataStream.readInt();
+            for (long i = 0; i < partitions; ++i) {
+                long dataPos = metadataStream.readLong();
+                int partitionId = metadataStream.readInt();
+                PartitionOwner partitionOwner = idOwnerMap.get(partitionId);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("prepareSuperstepRestart: File " + metadataPath +
+                              " with position " + dataPos +
+                              ", partition id = " + partitionId +
+                              " assigned to " + partitionOwner);
+                }
+                partitionOwner.setCheckpointFilesPrefix(checkpointFilePrefix);
+            }
+            metadataStream.close();
+        }
+    }
+
+    @Override
+    public void setup() {
+        // Might have to manually load a checkpoint.  In that case, the input
+        // splits are not set; they will be faked by the checkpoint files,
+        // with each checkpoint file acting as an input split.
+        superstepCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Superstep");
+        vertexCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Aggregate vertices");
+        finishedVertexCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Aggregate finished vertices");
+        edgeCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Aggregate edges");
+        sentMessagesCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Sent messages");
+        currentWorkersCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Current workers");
+        currentMasterTaskPartitionCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Current master task partition");
+        lastCheckpointedSuperstepCounter = getContext().getCounter(
+            GIRAPH_STATS_COUNTER_GROUP_NAME, "Last checkpointed superstep");
+        if (getRestartedSuperstep() != UNSET_SUPERSTEP) {
+            superstepCounter.increment(getRestartedSuperstep());
+        }
+    }
+
+    @Override
+    public boolean becomeMaster() {
+        // Create my bid to become the master, then try to become the master
+        // or return false.
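+        // This follows the standard ZooKeeper leader-election recipe: each
+        // candidate creates an EPHEMERAL_SEQUENTIAL znode, the candidate
+        // with the lowest sequence number wins, and the rest watch the
+        // election children and re-check whenever they change.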
+        String myBid = null;
+        try {
+            myBid =
+                getZkExt().createExt(MASTER_ELECTION_PATH +
+                    "/" + getHostnamePartitionId(),
+                    null,
+                    Ids.OPEN_ACL_UNSAFE,
+                    CreateMode.EPHEMERAL_SEQUENTIAL,
+                    true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "becomeMaster: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "becomeMaster: IllegalStateException", e);
+        }
+        while (true) {
+            JSONObject jobState = getJobState();
+            try {
+                if ((jobState != null) &&
+                    ApplicationState.valueOf(
+                        jobState.getString(JSONOBJ_STATE_KEY)) ==
+                            ApplicationState.FINISHED) {
+                    LOG.info("becomeMaster: Job is finished, " +
+                             "give up trying to be the master!");
+                    isMaster = false;
+                    return isMaster;
+                }
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "becomeMaster: Couldn't get state from " + jobState, e);
+            }
+            try {
+                List<String> masterChildArr =
+                    getZkExt().getChildrenExt(
+                        MASTER_ELECTION_PATH, true, true, true);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("becomeMaster: First child is '" +
+                             masterChildArr.get(0) + "' and my bid is '" +
+                             myBid + "'");
+                }
+                if (masterChildArr.get(0).equals(myBid)) {
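+                    // Hadoop counters can only be incremented, so an
+                    // absolute value is set by adding the delta from the
+                    // counter's current value.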
+                    currentMasterTaskPartitionCounter.increment(
+                        getTaskPartition() -
+                        currentMasterTaskPartitionCounter.getValue());
+                    aggregatorWriter = 
+                        BspUtils.createAggregatorWriter(getConfiguration());
+                    try {
+                        aggregatorWriter.initialize(getContext(),
+                                                    getApplicationAttempt());
+                    } catch (IOException e) {
+                        throw new IllegalStateException("becomeMaster: " +
+                            "Couldn't initialize aggregatorWriter", e);
+                    }
+                    LOG.info("becomeMaster: I am now the master!");
+                    isMaster = true;
+                    return isMaster;
+                }
+                LOG.info("becomeMaster: Waiting to become the master...");
+                getMasterElectionChildrenChangedEvent().waitForever();
+                getMasterElectionChildrenChangedEvent().reset();
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "becomeMaster: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "becomeMaster: IllegalStateException", e);
+            }
+        }
+    }
+
+    /**
+     * Collect and aggregate the worker statistics for a particular superstep.
+     *
+     * @param superstep Superstep to aggregate on
+     * @return Global statistics aggregated on all worker statistics
+     */
+    private GlobalStats aggregateWorkerStats(long superstep) {
+        Class<? extends Writable> partitionStatsClass =
+            masterGraphPartitioner.createPartitionStats().getClass();
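+        // The partitioner supplies the concrete stats class so that the
+        // worker-reported stats can be deserialized reflectively below.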
+        GlobalStats globalStats = new GlobalStats();
+        // Get the stats from all the selected worker nodes
+        String workerFinishedPath =
+            getWorkerFinishedPath(getApplicationAttempt(), superstep);
+        List<String> workerFinishedPathList = null;
+        try {
+            workerFinishedPathList =
+                getZkExt().getChildrenExt(
+                    workerFinishedPath, false, false, true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "aggregateWorkerStats: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "aggregateWorkerStats: InterruptedException", e);
+        }
+
+        allPartitionStatsList.clear();
+        for (String finishedPath : workerFinishedPathList) {
+            JSONObject workerFinishedInfoObj = null;
+            try {
+                byte [] zkData =
+                    getZkExt().getData(finishedPath, false, null);
+                workerFinishedInfoObj = new JSONObject(new String(zkData));
+                List<? extends Writable> writableList =
+                    WritableUtils.readListFieldsFromByteArray(
+                        Base64.decode(workerFinishedInfoObj.getString(
+                            JSONOBJ_PARTITION_STATS_KEY)),
+                        partitionStatsClass,
+                        getConfiguration());
+                for (Writable writable : writableList) {
+                    globalStats.addPartitionStats((PartitionStats) writable);
+                    allPartitionStatsList.add((PartitionStats) writable);
+                }
+                // The message count is reported once per worker, so add it
+                // outside the per-partition loop to avoid double counting.
+                globalStats.addMessageCount(
+                    workerFinishedInfoObj.getLong(
+                        JSONOBJ_NUM_MESSAGES_KEY));
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "aggregateWorkerStats: JSONException", e);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "aggregateWorkerStats: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "aggregateWorkerStats: InterruptedException", e);
+            } catch (IOException e) {
+                throw new IllegalStateException(
+                    "aggregateWorkerStats: IOException", e);
+            }
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("aggregateWorkerStats: Aggregation found " + globalStats +
+                     " on superstep = " + getSuperstep());
+        }
+        return globalStats;
+    }
+
+    /**
+     * Get the aggregator values for a particular superstep,
+     * aggregate and save them. Does nothing on the INPUT_SUPERSTEP.
+     *
+     * @param superstep superstep to check
+     */
+    private void collectAndProcessAggregatorValues(long superstep) {
+        if (superstep == INPUT_SUPERSTEP) {
+            // Nothing to collect on the input superstep
+            return;
+        }
+        Map<String, Aggregator<? extends Writable>> aggregatorMap =
+            new TreeMap<String, Aggregator<? extends Writable>>();
+        String workerFinishedPath =
+            getWorkerFinishedPath(getApplicationAttempt(), superstep);
+        List<String> hostnameIdPathList = null;
+        try {
+            hostnameIdPathList =
+                getZkExt().getChildrenExt(
+                    workerFinishedPath, false, false, true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "collectAndProcessAggregatorValues: KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "collectAndProcessAggregatorValues: InterruptedException", e);
+        }
+
+        for (String hostnameIdPath : hostnameIdPathList) {
+            JSONObject workerFinishedInfoObj = null;
+            JSONArray aggregatorArray = null;
+            try {
+                byte [] zkData =
+                    getZkExt().getData(hostnameIdPath, false, null);
+                workerFinishedInfoObj = new JSONObject(new String(zkData));
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "collectAndProcessAggregatorValues: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "collectAndProcessAggregatorValues: InterruptedException",
+                    e);
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "collectAndProcessAggregatorValues: JSONException", e);
+            }
+            try {
+                aggregatorArray = workerFinishedInfoObj.getJSONArray(
+                    JSONOBJ_AGGREGATOR_VALUE_ARRAY_KEY);
+            } catch (JSONException e) {
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("collectAndProcessAggregatorValues: " +
+                              "No aggregators" + " for " + hostnameIdPath);
+                }
+                continue;
+            }
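+            // Each array entry carries the aggregator name, its class name,
+            // and the aggregated value as a Base64-encoded Writable.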
+            for (int i = 0; i < aggregatorArray.length(); ++i) {
+                try {
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("collectAndProcessAggregatorValues: " +
+                                 "Getting aggregators from " +
+                                 aggregatorArray.getJSONObject(i));
+                    }
+                    String aggregatorName =
+                        aggregatorArray.getJSONObject(i).getString(
+                            AGGREGATOR_NAME_KEY);
+                    String aggregatorClassName =
+                        aggregatorArray.getJSONObject(i).getString(
+                            AGGREGATOR_CLASS_NAME_KEY);
+                    @SuppressWarnings("unchecked")
+                    Aggregator<Writable> aggregator =
+                        (Aggregator<Writable>) aggregatorMap.get(aggregatorName);
+                    boolean firstTime = false;
+                    if (aggregator == null) {
+                        @SuppressWarnings("unchecked")
+                        Aggregator<Writable> aggregatorWritable =
+                            (Aggregator<Writable>) getAggregator(aggregatorName);
+                        aggregator = aggregatorWritable;
+                        if (aggregator == null) {
+                            @SuppressWarnings("unchecked")
+                            Class<? extends Aggregator<Writable>> aggregatorClass =
+                                (Class<? extends Aggregator<Writable>>)
+                                    Class.forName(aggregatorClassName);
+                            aggregator = registerAggregator(
+                                aggregatorName,
+                                aggregatorClass);
+                        }
+                        aggregatorMap.put(aggregatorName, aggregator);
+                        firstTime = true;
+                    }
+                    Writable aggregatorValue =
+                        aggregator.createAggregatedValue();
+                    InputStream input =
+                        new ByteArrayInputStream(
+                            Base64.decode(
+                                aggregatorArray.getJSONObject(i).
+                                getString(AGGREGATOR_VALUE_KEY)));
+                    aggregatorValue.readFields(new DataInputStream(input));
+                    if (LOG.isDebugEnabled()) {
+                        LOG.debug("collectAndProcessAggregatorValues: " +
+                                  "aggregator value size=" + input.available() +
+                                  " for aggregator=" + aggregatorName +
+                                  " value=" + aggregatorValue);
+                    }
+                    if (firstTime) {
+                        aggregator.setAggregatedValue(aggregatorValue);
+                    } else {
+                        aggregator.aggregate(aggregatorValue);
+                    }
+                } catch (IOException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "IOException when reading aggregator data " +
+                        aggregatorArray, e);
+                } catch (JSONException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "JSONException when reading aggregator data " +
+                        aggregatorArray, e);
+                } catch (ClassNotFoundException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "ClassNotFoundException when reading aggregator data " +
+                        aggregatorArray, e);
+                } catch (InstantiationException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "InstantiationException when reading aggregator data " +
+                        aggregatorArray, e);
+                } catch (IllegalAccessException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "IOException when reading aggregator data " +
+                        aggregatorArray, e);
+                }
+            }
+        }
+        if (aggregatorMap.size() > 0) {
+            String mergedAggregatorPath =
+                getMergedAggregatorPath(getApplicationAttempt(), superstep);
+            byte [] zkData = null;
+            JSONArray aggregatorArray = new JSONArray();
+            for (Map.Entry<String, Aggregator<? extends Writable>> entry :
+                    aggregatorMap.entrySet()) {
+                try {
+                    ByteArrayOutputStream outputStream =
+                        new ByteArrayOutputStream();
+                    DataOutput output = new DataOutputStream(outputStream);
+                    entry.getValue().getAggregatedValue().write(output);
+
+                    JSONObject aggregatorObj = new JSONObject();
+                    aggregatorObj.put(AGGREGATOR_NAME_KEY,
+                                      entry.getKey());
+                    aggregatorObj.put(
+                        AGGREGATOR_VALUE_KEY,
+                        Base64.encodeBytes(outputStream.toByteArray()));
+                    aggregatorArray.put(aggregatorObj);
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("collectAndProcessAggregatorValues: " +
+                                 "Trying to add aggregatorObj " +
+                                 aggregatorObj + "(" +
+                                 entry.getValue().getAggregatedValue() +
+                                 ") to merged aggregator path " +
+                                 mergedAggregatorPath);
+                    }
+                } catch (IOException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: " +
+                        "IllegalStateException", e);
+                } catch (JSONException e) {
+                    throw new IllegalStateException(
+                        "collectAndProcessAggregatorValues: JSONException", e);
+                }
+            }
+            try {
+                zkData = aggregatorArray.toString().getBytes();
+                getZkExt().createExt(mergedAggregatorPath,
+                                     zkData,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+            } catch (KeeperException.NodeExistsException e) {
+                LOG.warn("collectAndProcessAggregatorValues: " +
+                         mergedAggregatorPath +
+                         " already exists!");
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "collectAndProcessAggregatorValues: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "collectAndProcessAggregatorValues: IllegalStateException",
+                    e);
+            }
+            if (LOG.isInfoEnabled()) {
+                LOG.info("collectAndProcessAggregatorValues: Finished " +
+                         "loading " +
+                         mergedAggregatorPath+ " with aggregator values " +
+                         aggregatorArray);
+            }
+        }
+    }
+
+    /**
+     * Finalize the checkpoint file prefixes by taking the chosen workers and
+     * writing them to a finalized file.  Also write out the master
+     * aggregated aggregator array from the previous superstep.
+     *
+     * @param superstep superstep to finalize
+     * @param chosenWorkerInfoList List of chosen workers that will be finalized
+     * @throws IOException
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    private void finalizeCheckpoint(
+            long superstep,
+            List<WorkerInfo> chosenWorkerInfoList)
+            throws IOException, KeeperException, InterruptedException {
+        Path finalizedCheckpointPath =
+            new Path(getCheckpointBasePath(superstep) +
+                     CHECKPOINT_FINALIZED_POSTFIX);
+        try {
+            getFs().delete(finalizedCheckpointPath, false);
+        } catch (IOException e) {
+            LOG.warn("finalizedValidCheckpointPrefixes: Removed old file " +
+                     finalizedCheckpointPath);
+        }
+
+        // Format:
+        // <number of files>
+        // <used file prefix 0><used file prefix 1>...
+        // <aggregator data length><aggregators as a serialized JSON byte array>
+        FSDataOutputStream finalizedOutputStream =
+            getFs().create(finalizedCheckpointPath);
+        finalizedOutputStream.writeInt(chosenWorkerInfoList.size());
+        for (WorkerInfo chosenWorkerInfo : chosenWorkerInfoList) {
+            String chosenWorkerInfoPrefix =
+                getCheckpointBasePath(superstep) + "." +
+                chosenWorkerInfo.getHostnameId();
+            finalizedOutputStream.writeUTF(chosenWorkerInfoPrefix);
+        }
+        String mergedAggregatorPath =
+            getMergedAggregatorPath(getApplicationAttempt(), superstep - 1);
+        if (getZkExt().exists(mergedAggregatorPath, false) != null) {
+            byte [] aggregatorZkData =
+                getZkExt().getData(mergedAggregatorPath, false, null);
+            finalizedOutputStream.writeInt(aggregatorZkData.length);
+            finalizedOutputStream.write(aggregatorZkData);
+        }
+        else {
+            finalizedOutputStream.writeInt(0);
+        }
+        finalizedOutputStream.close();
+        lastCheckpointedSuperstep = superstep;
+        lastCheckpointedSuperstepCounter.increment(superstep -
+            lastCheckpointedSuperstepCounter.getValue());
+    }
+
+    /**
+     * Assign the partitions for this superstep.  If there are changes,
+     * the workers will know how to do the exchange.  If this was a restarted
+     * superstep, then make sure to provide information on where to find the
+     * checkpoint file.
+     *
+     * @param allPartitionStatsList All partition stats
+     * @param chosenWorkerInfoList All the chosen worker infos
+     * @param masterGraphPartitioner Master graph partitioner
+     */
+    private void assignPartitionOwners(
+            List<PartitionStats> allPartitionStatsList,
+            List<WorkerInfo> chosenWorkerInfoList,
+            MasterGraphPartitioner<I, V, E, M> masterGraphPartitioner) {
+        Collection<PartitionOwner> partitionOwners;
+        if (getSuperstep() == INPUT_SUPERSTEP ||
+                getSuperstep() == getRestartedSuperstep()) {
+            partitionOwners =
+                masterGraphPartitioner.createInitialPartitionOwners(
+                    chosenWorkerInfoList, maxWorkers);
+            if (partitionOwners.isEmpty()) {
+                throw new IllegalStateException(
+                    "assignAndExchangePartitions: No partition owners set");
+            }
+        } else {
+            partitionOwners =
+                masterGraphPartitioner.generateChangedPartitionOwners(
+                    allPartitionStatsList,
+                    chosenWorkerInfoList,
+                    maxWorkers,
+                    getSuperstep());
+
+            PartitionUtils.analyzePartitionStats(partitionOwners,
+                                                 allPartitionStatsList);
+        }
+
+        // If restarted, prepare the checkpoint restart
+        if (getRestartedSuperstep() == getSuperstep()) {
+            try {
+                prepareCheckpointRestart(getSuperstep(), partitionOwners);
+            } catch (IOException e) {
+                throw new IllegalStateException(
+                    "assignPartitionOwners: IOException on preparing", e);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "assignPartitionOwners: KeeperException on preparing", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "assignPartitionOwners: InteruptedException on preparing",
+                    e);
+            }
+        }
+
+        // There will be some exchange of partitions
+        if (!partitionOwners.isEmpty()) {
+            String vertexExchangePath =
+                getPartitionExchangePath(getApplicationAttempt(),
+                                         getSuperstep());
+            try {
+                getZkExt().createOnceExt(vertexExchangePath,
+                                         null,
+                                         Ids.OPEN_ACL_UNSAFE,
+                                         CreateMode.PERSISTENT,
+                                         true);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "assignPartitionOwners: KeeperException creating " +
+                    vertexExchangePath, e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "assignPartitionOwners: InterruptedException creating " +
+                    vertexExchangePath, e);
+            }
+        }
+
+        // Workers are waiting for these assignments
+        String partitionAssignmentsPath =
+            getPartitionAssignmentsPath(getApplicationAttempt(),
+                                        getSuperstep());
+        WritableUtils.writeListToZnode(
+            getZkExt(),
+            partitionAssignmentsPath,
+            -1,
+            new ArrayList<Writable>(partitionOwners));
+    }
+
+    /**
+     * Check whether the workers chosen for this superstep are still alive
+     *
+     * @param chosenWorkerInfoHealthPath Path to the healthy workers in ZooKeeper
+     * @param chosenWorkerInfoList List of the chosen workers to check
+     * @return true if they are all alive, false otherwise.
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    private boolean superstepChosenWorkerAlive(
+            String chosenWorkerInfoHealthPath,
+            List<WorkerInfo> chosenWorkerInfoList)
+            throws KeeperException, InterruptedException {
+        List<WorkerInfo> chosenWorkerInfoHealthyList =
+            getWorkerInfosFromPath(chosenWorkerInfoHealthPath, false);
+        Set<WorkerInfo> chosenWorkerInfoHealthySet =
+            new HashSet<WorkerInfo>(chosenWorkerInfoHealthyList);
+        boolean allChosenWorkersHealthy = true;
+        for (WorkerInfo chosenWorkerInfo : chosenWorkerInfoList) {
+            if (!chosenWorkerInfoHealthySet.contains(chosenWorkerInfo)) {
+                allChosenWorkersHealthy = false;
+                LOG.error("superstepChosenWorkerAlive: Missing chosen " +
+                          "worker " + chosenWorkerInfo +
+                          " on superstep " + getSuperstep());
+            }
+        }
+        return allChosenWorkersHealthy;
+    }
+
+    @Override
+    public void restartFromCheckpoint(long checkpoint) {
+        // Process:
+        // 1. Remove all old input split data
+        // 2. Increase the application attempt and set to the correct checkpoint
+        // 3. Send command to all workers to restart their tasks
+        try {
+            getZkExt().deleteExt(INPUT_SPLIT_PATH, -1, true);
+        } catch (InterruptedException e) {
+            throw new RuntimeException(
+                "retartFromCheckpoint: InterruptedException", e);
+        } catch (KeeperException e) {
+            throw new RuntimeException(
+                "retartFromCheckpoint: KeeperException", e);
+        }
+        setApplicationAttempt(getApplicationAttempt() + 1);
+        setCachedSuperstep(checkpoint);
+        setRestartedSuperstep(checkpoint);
+        setJobState(ApplicationState.START_SUPERSTEP,
+                    getApplicationAttempt(),
+                    checkpoint);
+    }
+
+    /**
+     * Only get the finalized checkpoint files
+     */
+    public static class FinalizedCheckpointPathFilter implements PathFilter {
+        @Override
+        public boolean accept(Path path) {
+            return path.getName().endsWith(
+                BspService.CHECKPOINT_FINALIZED_POSTFIX);
+        }
+    }
+
+    @Override
+    public long getLastGoodCheckpoint() throws IOException {
+        // Find the last good checkpoint if this master has no knowledge
+        // of one having been written
+        if (lastCheckpointedSuperstep == -1) {
+            FileStatus[] fileStatusArray =
+                getFs().listStatus(new Path(CHECKPOINT_BASE_PATH),
+                                   new FinalizedCheckpointPathFilter());
+            if (fileStatusArray == null) {
+                return -1;
+            }
+            Arrays.sort(fileStatusArray);
+            lastCheckpointedSuperstep = getCheckpoint(
+                fileStatusArray[fileStatusArray.length - 1].getPath());
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getLastGoodCheckpoint: Found last good checkpoint " +
+                         lastCheckpointedSuperstep + " from " +
+                         fileStatusArray[fileStatusArray.length - 1].
+                         getPath().toString());
+            }
+        }
+        return lastCheckpointedSuperstep;
+    }
+
+    /**
+     * Wait for a set of workers to signal that they are done with the
+     * barrier.
+     *
+     * @param finishedWorkerPath Path to where the workers will register their
+     *        hostname and id
+     * @param workerInfoList List of the workers to wait for
+     * @param event Event to wait on between checks of the barrier
+     * @return True if barrier was successful, false if there was a worker
+     *         failure
+     */
+    private boolean barrierOnWorkerList(String finishedWorkerPath,
+                                        List<WorkerInfo> workerInfoList,
+                                        BspEvent event) {
+        try {
+            getZkExt().createOnceExt(finishedWorkerPath,
+                                     null,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "barrierOnWorkerList: KeeperException - Couldn't create " +
+                finishedWorkerPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "barrierOnWorkerList: InterruptedException - Couldn't create " +
+                finishedWorkerPath, e);
+        }
+        List<String> hostnameIdList =
+            new ArrayList<String>(workerInfoList.size());
+        for (WorkerInfo workerInfo : workerInfoList) {
+            hostnameIdList.add(workerInfo.getHostnameId());
+        }
+        String workerInfoHealthyPath =
+            getWorkerInfoHealthyPath(getApplicationAttempt(), getSuperstep());
+        List<String> finishedHostnameIdList;
+        long nextInfoMillis = System.currentTimeMillis();
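+        // Poll the finished-worker znodes until every chosen worker has
+        // reported in, logging progress at most every 30 seconds and
+        // checking for dead workers between waits.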
+        while (true) {
+            try {
+                finishedHostnameIdList =
+                    getZkExt().getChildrenExt(finishedWorkerPath,
+                                              true,
+                                              false,
+                                              false);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "barrierOnWorkerList: KeeperException - Couldn't get " +
+                    "children of " + finishedWorkerPath, e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "barrierOnWorkerList: IllegalException - Couldn't get " +
+                    "children of " + finishedWorkerPath, e);
+            }
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("barrierOnWorkerList: Got finished worker list = " +
+                          finishedHostnameIdList + ", size = " +
+                          finishedHostnameIdList.size() +
+                          ", worker list = " +
+                          workerInfoList + ", size = " +
+                          workerInfoList.size() +
+                          " from " + finishedWorkerPath);
+            }
+
+            if (LOG.isInfoEnabled() &&
+                    (System.currentTimeMillis() > nextInfoMillis)) {
+                nextInfoMillis = System.currentTimeMillis() + 30000;
+                LOG.info("barrierOnWorkerList: " +
+                         finishedHostnameIdList.size() +
+                         " out of " + workerInfoList.size() +
+                         " workers finished on superstep " +
+                         getSuperstep() + " on path " + finishedWorkerPath);
+            }
+            getContext().setStatus(getGraphMapper().getMapFunctions() + " - " +
+                                   finishedHostnameIdList.size() +
+                                   " finished out of " +
+                                   workerInfoList.size() +
+                                   " on superstep " + getSuperstep());
+            if (finishedHostnameIdList.containsAll(hostnameIdList)) {
+                break;
+            }
+
+            // Wait for a signal, or at most 60 seconds, before reporting
+            // progress and checking again.
+            event.waitMsecs(60 * 1000);
+            event.reset();
+            getContext().progress();
+
+            // Did a worker die?
+            try {
+                if ((getSuperstep() > 0) &&
+                        !superstepChosenWorkerAlive(
+                            workerInfoHealthyPath,
+                            workerInfoList)) {
+                    return false;
+                }
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "barrierOnWorkerList: KeeperException - " +
+                    "Couldn't get " + workerInfoHealthyPath, e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "barrierOnWorkerList: InterruptedException - " +
+                    "Couldn't get " + workerInfoHealthyPath, e);
+            }
+        }
+
+        return true;
+    }
+
+
+    @Override
+    public SuperstepState coordinateSuperstep() throws
+            KeeperException, InterruptedException {
+        // 1. Get chosen workers and set up watches on them.
+        // 2. Assign partitions to the workers
+        //    (possibly reloading from a superstep)
+        // 3. Wait for all workers to complete
+        // 4. Collect and process aggregators
+        // 5. Create superstep finished node
+        // 6. If the checkpoint frequency is met, finalize the checkpoint
+        List<WorkerInfo> chosenWorkerInfoList = checkWorkers();
+        if (chosenWorkerInfoList == null) {
+            LOG.fatal("coordinateSuperstep: Not enough healthy workers for " +
+                      "superstep " + getSuperstep());
+            setJobState(ApplicationState.FAILED, -1, -1);
+            // Return early to avoid dereferencing the null worker list below.
+            return SuperstepState.WORKER_FAILURE;
+        }
+        for (WorkerInfo workerInfo : chosenWorkerInfoList) {
+            String workerInfoHealthyPath =
+                getWorkerInfoHealthyPath(getApplicationAttempt(),
+                                         getSuperstep()) + "/" +
+                                         workerInfo.getHostnameId();
+            if (getZkExt().exists(workerInfoHealthyPath, true) == null) {
+                LOG.warn("coordinateSuperstep: Chosen worker " +
+                         workerInfoHealthyPath +
+                         " is no longer valid, failing superstep");
+            }
+        }
+
+        currentWorkersCounter.increment(chosenWorkerInfoList.size() -
+                                        currentWorkersCounter.getValue());
+        assignPartitionOwners(allPartitionStatsList,
+                              chosenWorkerInfoList,
+                              masterGraphPartitioner);
+
+        if (getSuperstep() == INPUT_SUPERSTEP) {
+            // Coordinate the workers finishing sending their vertices to the
+            // correct workers and signal when everything is done.
+            if (!barrierOnWorkerList(INPUT_SPLIT_DONE_PATH,
+                                     chosenWorkerInfoList,
+                                     getInputSplitsDoneStateChangedEvent())) {
+                throw new IllegalStateException(
+                    "coordinateSuperstep: Worker failed during input split " +
+                    "(currently not supported)");
+            }
+            try {
+                getZkExt().create(INPUT_SPLITS_ALL_DONE_PATH,
+                            null,
+                            Ids.OPEN_ACL_UNSAFE,
+                            CreateMode.PERSISTENT);
+            } catch (KeeperException.NodeExistsException e) {
+                LOG.info("coordinateInputSplits: Node " +
+                         INPUT_SPLITS_ALL_DONE_PATH + " already exists.");
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "coordinateInputSplits: KeeperException", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "coordinateInputSplits: IllegalStateException", e);
+            }
+        }
+
+        String finishedWorkerPath =
+            getWorkerFinishedPath(getApplicationAttempt(), getSuperstep());
+        if (!barrierOnWorkerList(finishedWorkerPath,
+                                 chosenWorkerInfoList,
+                                 getSuperstepStateChangedEvent())) {
+            return SuperstepState.WORKER_FAILURE;
+        }
+
+        collectAndProcessAggregatorValues(getSuperstep());
+        GlobalStats globalStats = aggregateWorkerStats(getSuperstep());
+
+        // Let everyone know the aggregated application state through the
+        // superstep finishing znode.
+        String superstepFinishedNode =
+            getSuperstepFinishedPath(getApplicationAttempt(), getSuperstep());
+        WritableUtils.writeToZnode(
+            getZkExt(), superstepFinishedNode, -1, globalStats);
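+        // Counters are set to absolute values by incrementing by the
+        // delta from their current values.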
+        vertexCounter.increment(
+            globalStats.getVertexCount() -
+            vertexCounter.getValue());
+        finishedVertexCounter.increment(
+            globalStats.getFinishedVertexCount() -
+            finishedVertexCounter.getValue());
+        edgeCounter.increment(
+            globalStats.getEdgeCount() -
+            edgeCounter.getValue());
+        sentMessagesCounter.increment(
+            globalStats.getMessageCount() -
+            sentMessagesCounter.getValue());
+
+        // Finalize the valid checkpoint file prefixes and possibly
+        // the aggregators.
+        if (checkpointFrequencyMet(getSuperstep())) {
+            try {
+                finalizeCheckpoint(getSuperstep(), chosenWorkerInfoList);
+            } catch (IOException e) {
+                throw new IllegalStateException(
+                    "coordinateSuperstep: IOException on finalizing checkpoint",
+                    e);
+            }
+        }
+
+        // Clean up the old supersteps (always keep this one)
+        long removableSuperstep = getSuperstep() - 1;
+        if (!getConfiguration().getBoolean(
+                GiraphJob.KEEP_ZOOKEEPER_DATA,
+                GiraphJob.KEEP_ZOOKEEPER_DATA_DEFAULT) &&
+                (removableSuperstep >= 0)) {
+            String oldSuperstepPath =
+                getSuperstepPath(getApplicationAttempt()) + "/" +
+                removableSuperstep;
+            try {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("coordinateSuperstep: Cleaning up old Superstep " +
+                             oldSuperstepPath);
+                }
+                getZkExt().deleteExt(oldSuperstepPath,
+                                     -1,
+                                     true);
+            } catch (KeeperException.NoNodeException e) {
+                LOG.warn("coordinateBarrier: Already cleaned up " +
+                         oldSuperstepPath);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "coordinateSuperstep: KeeperException on " +
+                    "finalizing checkpoint", e);
+            }
+        }
+        incrCachedSuperstep();
+        // Counter starts at zero, so no need to increment
+        if (getSuperstep() > 0) {
+            superstepCounter.increment(1);
+        }
+        SuperstepState superstepState;
+        if ((globalStats.getFinishedVertexCount() ==
+                globalStats.getVertexCount()) &&
+                globalStats.getMessageCount() == 0) {
+            superstepState = SuperstepState.ALL_SUPERSTEPS_DONE;
+        } else {
+            superstepState = SuperstepState.THIS_SUPERSTEP_DONE;
+        }
+        try {
+            aggregatorWriter.writeAggregator(getAggregatorMap(),
+                (superstepState == SuperstepState.ALL_SUPERSTEPS_DONE) ? 
+                    AggregatorWriter.LAST_SUPERSTEP : getSuperstep());
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "coordinateSuperstep: IOException while " +
+                "writing aggregators data", e);
+        }
+        
+        return superstepState;
+    }
+
+    /**
+     * Need to clean up ZooKeeper nicely.  Make sure all the masters and workers
+     * have reported ending their ZooKeeper connections.
+     */
+    private void cleanUpZooKeeper() {
+        try {
+            getZkExt().createExt(CLEANED_UP_PATH,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("cleanUpZooKeeper: Node " + CLEANED_UP_PATH +
+                " already exists, no need to create.");
+            }
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "cleanupZooKeeper: Got KeeperException", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "cleanupZooKeeper: Got IllegalStateException", e);
+        }
+        // Need to wait for the number of workers and masters to complete
+        int maxTasks = BspInputFormat.getMaxTasks(getConfiguration());
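+        // A task that runs both the master and worker roles registers a
+        // cleaned-up znode for each role, so expect twice as many children.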
+        if ((getGraphMapper().getMapFunctions() == MapFunctions.ALL) ||
+                (getGraphMapper().getMapFunctions() ==
+                    MapFunctions.ALL_EXCEPT_ZOOKEEPER)) {
+            maxTasks *= 2;
+        }
+        List<String> cleanedUpChildrenList = null;
+        while (true) {
+            try {
+                cleanedUpChildrenList =
+                    getZkExt().getChildrenExt(
+                        CLEANED_UP_PATH, true, false, true);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("cleanUpZooKeeper: Got " +
+                             cleanedUpChildrenList.size() + " of " +
+                             maxTasks  +  " desired children from " +
+                             CLEANED_UP_PATH);
+                }
+                if (cleanedUpChildrenList.size() == maxTasks) {
+                    break;
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("cleanedUpZooKeeper: Waiting for the " +
+                             "children of " + CLEANED_UP_PATH +
+                             " to change since only got " +
+                             cleanedUpChildrenList.size() + " nodes.");
+                }
+            } catch (Exception e) {
+                // We are in the cleanup phase -- just log the error
+                LOG.error("cleanUpZooKeeper: Got exception, but will continue",
+                          e);
+                return;
+            }
+
+            getCleanedUpChildrenChangedEvent().waitForever();
+            getCleanedUpChildrenChangedEvent().reset();
+        }
+
+        // At this point, all processes have acknowledged the cleanup,
+        // and the master can do any final cleanup
+        try {
+            if (!getConfiguration().getBoolean(
+                    GiraphJob.KEEP_ZOOKEEPER_DATA,
+                    GiraphJob.KEEP_ZOOKEEPER_DATA_DEFAULT)) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("cleanUpZooKeeper: Removing the following path " +
+                             "and all children - " + BASE_PATH);
+                }
+                getZkExt().deleteExt(BASE_PATH, -1, true);
+            }
+        } catch (Exception e) {
+            LOG.error("cleanupZooKeeper: Failed to do cleanup of " +
+                      BASE_PATH, e);
+        }
+    }
+
+    @Override
+    public void cleanup() throws IOException {
+        // All master processes should denote they are done by adding special
+        // znode.  Once the number of znodes equals the number of partitions
+        // for workers and masters, the master will clean up the ZooKeeper
+        // znodes associated with this job.
+        String cleanedUpPath = CLEANED_UP_PATH  + "/" +
+            getTaskPartition() + MASTER_SUFFIX;
+        try {
+            String finalFinishedPath =
+                getZkExt().createExt(cleanedUpPath,
+                                     null,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("cleanup: Notifying master its okay to cleanup with " +
+                         finalFinishedPath);
+            }
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("cleanup: Couldn't create finished node '" +
+                         cleanedUpPath + "'");
+            }
+        } catch (KeeperException e) {
+            LOG.error("cleanup: Got KeeperException, continuing", e);
+        } catch (InterruptedException e) {
+            LOG.error("cleanup: Got InterruptedException, continuing", e);
+        }
+
+        if (isMaster) {
+            cleanUpZooKeeper();
+            // If desired, cleanup the checkpoint directory
+            if (getConfiguration().getBoolean(
+                    GiraphJob.CLEANUP_CHECKPOINTS_AFTER_SUCCESS,
+                    GiraphJob.CLEANUP_CHECKPOINTS_AFTER_SUCCESS_DEFAULT)) {
+                boolean success =
+                    getFs().delete(new Path(CHECKPOINT_BASE_PATH), true);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("cleanup: Removed HDFS checkpoint directory (" +
+                             CHECKPOINT_BASE_PATH + ") with return = " +
+                             success + " since this job succeeded ");
+                }
+            }
+            aggregatorWriter.close();
+        }
+
+        try {
+            getZkExt().close();
+        } catch (InterruptedException e) {
+            // cleanup phase -- just log the error
+            LOG.error("cleanup: Zookeeper failed to close", e);
+        }
+    }
+
+    /**
+     * Event that the master watches that denotes if a worker has done something
+     * that changes the state of a superstep (either a worker completed or died)
+     *
+     * @return Event that denotes a superstep state change
+     */
+    public final BspEvent getSuperstepStateChangedEvent() {
+        return superstepStateChanged;
+    }
+
+    /**
+     * Should this worker failure cause the current superstep to fail?
+     *
+     * @param failedWorkerPath Full path to the failed worker
+     */
+    private void checkHealthyWorkerFailure(String failedWorkerPath) {
+        if (getSuperstepFromPath(failedWorkerPath) < getSuperstep()) {
+            return;
+        }
+
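+        // The failure only matters if the dead worker owns, or previously
+        // owned, one of the current partitions.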
+        Collection<PartitionOwner> partitionOwners =
+            masterGraphPartitioner.getCurrentPartitionOwners();
+        String hostnameId =
+            getHealthyHostnameIdFromPath(failedWorkerPath);
+        for (PartitionOwner partitionOwner : partitionOwners) {
+            WorkerInfo workerInfo = partitionOwner.getWorkerInfo();
+            WorkerInfo previousWorkerInfo =
+                partitionOwner.getPreviousWorkerInfo();
+            if (workerInfo.getHostnameId().equals(hostnameId) ||
+                ((previousWorkerInfo != null) &&
+                    previousWorkerInfo.getHostnameId().equals(hostnameId))) {
+                LOG.warn("checkHealthyWorkerFailure: " +
+                        "at least one healthy worker went down " +
+                        "for superstep " + getSuperstep() + " - " +
+                        hostnameId + ", will try to restart from " +
+                        "checkpointed superstep " +
+                        lastCheckpointedSuperstep);
+                superstepStateChanged.signal();
+            }
+        }
+    }
+
+    @Override
+    public boolean processEvent(WatchedEvent event) {
+        boolean foundEvent = false;
+        if (event.getPath().contains(WORKER_HEALTHY_DIR) &&
+                (event.getType() == EventType.NodeDeleted)) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("processEvent: Healthy worker died (node deleted) " +
+                          "in " + event.getPath());
+            }
+            checkHealthyWorkerFailure(event.getPath());
+            superstepStateChanged.signal();
+            foundEvent = true;
+        } else if (event.getPath().contains(WORKER_FINISHED_DIR) &&
+                event.getType() == EventType.NodeChildrenChanged) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("processEvent: Worker finished (node change) " +
+                          "event - superstepStateChanged signaled");
+            }
+            superstepStateChanged.signal();
+            foundEvent = true;
+        }
+
+        return foundEvent;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/BspServiceWorker.java b/src/main/java/org/apache/giraph/graph/BspServiceWorker.java
new file mode 100644
index 0000000..2775819
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BspServiceWorker.java
@@ -0,0 +1,1484 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import net.iharder.Base64;
+
+import org.apache.giraph.bsp.ApplicationState;
+import org.apache.giraph.bsp.CentralizedServiceWorker;
+import org.apache.giraph.comm.RPCCommunications;
+import org.apache.giraph.comm.ServerInterface;
+import org.apache.giraph.graph.partition.Partition;
+import org.apache.giraph.graph.partition.PartitionExchange;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.giraph.graph.partition.WorkerGraphPartitioner;
+import org.apache.giraph.utils.MemoryUtils;
+import org.apache.giraph.utils.WritableUtils;
+import org.apache.giraph.zk.BspEvent;
+import org.apache.giraph.zk.PredicateLock;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.log4j.Logger;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher.Event.EventType;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.data.Stat;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeSet;
+
+/**
+ * ZooKeeper-based implementation of {@link CentralizedServiceWorker}.
+ */
+@SuppressWarnings("rawtypes")
+public class BspServiceWorker<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends BspService<I, V, E, M>
+        implements CentralizedServiceWorker<I, V, E, M> {
+    /** Number of input splits */
+    private int inputSplitCount = -1;
+    /** My process health znode */
+    private String myHealthZnode;
+    /** List of aggregators currently in use */
+    private Set<String> aggregatorInUse = new TreeSet<String>();
+    /** Worker info */
+    private final WorkerInfo workerInfo;
+    /** Worker graph partitioner */
+    private final WorkerGraphPartitioner<I, V, E, M> workerGraphPartitioner;
+    /** Input split vertex cache (only used when loading from input split) */
+    private final Map<PartitionOwner, Partition<I, V, E, M>>
+        inputSplitCache = new HashMap<PartitionOwner, Partition<I, V, E, M>>();
+    /** Communication service */
+    private final ServerInterface<I, V, E, M> commService;
+    /** Structure to store the partitions on this worker */
+    private final Map<Integer, Partition<I, V, E, M>> workerPartitionMap =
+        new HashMap<Integer, Partition<I, V, E, M>>();
+    /** Have the partition exchange children (workers) changed? */
+    private final BspEvent partitionExchangeChildrenChanged =
+        new PredicateLock();
+    /** Max vertices per partition before sending */
+    private final int maxVerticesPerPartition;
+    /** Worker Context */
+    private final WorkerContext workerContext;
+    /** Total vertices loaded */
+    private long totalVerticesLoaded = 0;
+    /** Total edges loaded */
+    private long totalEdgesLoaded = 0;
+    /** Input split max vertices (-1 denotes all) */
+    private final long inputSplitMaxVertices;
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(BspServiceWorker.class);
+
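+    /**
+     * Constructor.  Registers BSP events, computes this worker's RPC port,
+     * and creates the RPC communication service and worker context.
+     *
+     * @param serverPortList ZooKeeper server port list
+     * @param sessionMsecTimeout ZooKeeper session timeout in milliseconds
+     * @param context Mapper context for this task
+     * @param graphMapper Graph mapper that owns this service
+     * @param graphState Global graph state
+     */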
+    public BspServiceWorker(
+            String serverPortList,
+            int sessionMsecTimeout,
+            Mapper<?, ?, ?, ?>.Context context,
+            GraphMapper<I, V, E, M> graphMapper,
+            GraphState<I, V, E, M> graphState)
+            throws UnknownHostException, IOException, InterruptedException {
+        super(serverPortList, sessionMsecTimeout, context, graphMapper);
+        registerBspEvent(partitionExchangeChildrenChanged);
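+        // Offset the initial RPC port by this task's partition so that each
+        // worker starts from a distinct port (the RPC service may still
+        // probe higher ports if this one is taken).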
+        int finalRpcPort =
+            getConfiguration().getInt(GiraphJob.RPC_INITIAL_PORT,
+                                      GiraphJob.RPC_INITIAL_PORT_DEFAULT) +
+                                      getTaskPartition();
+        maxVerticesPerPartition =
+            getConfiguration().getInt(
+                GiraphJob.MAX_VERTICES_PER_PARTITION,
+                GiraphJob.MAX_VERTICES_PER_PARTITION_DEFAULT);
+        inputSplitMaxVertices =
+            getConfiguration().getLong(
+                GiraphJob.INPUT_SPLIT_MAX_VERTICES,
+                GiraphJob.INPUT_SPLIT_MAX_VERTICES_DEFAULT);
+        workerInfo =
+            new WorkerInfo(getHostname(), getTaskPartition(), finalRpcPort);
+        workerGraphPartitioner =
+            getGraphPartitionerFactory().createWorkerGraphPartitioner();
+        commService = new RPCCommunications<I, V, E, M>(
+            context, this, graphState);
+        graphState.setWorkerCommunications(commService);
+        this.workerContext =
+            BspUtils.createWorkerContext(getConfiguration(),
+                                         graphMapper.getGraphState());
+    }
+
+    public WorkerContext getWorkerContext() {
+        return workerContext;
+    }
+
+    /**
+     * Intended to check the health of the node.  For instance, can it ssh,
+     * dmesg, etc.  For now, this is a stub that always reports healthy.
+     */
+    public boolean isHealthy() {
+        return true;
+    }
+
+    /**
+     * Use an aggregator in this superstep.
+     *
+     * @param name Name of the aggregator to mark in use
+     * @return false if the aggregator was not registered, true otherwise
+     */
+    public boolean useAggregator(String name) {
+        if (getAggregatorMap().get(name) == null) {
+            LOG.error("useAggregator: Aggregator=" + name + " not registered");
+            return false;
+        }
+        aggregatorInUse.add(name);
+        return true;
+    }
+
+    /**
+     * Try to reserve an InputSplit for loading.  While InputSplits exist that
+     * are not finished, wait until they are.
+     *
+     * @return reserved InputSplit or null if no unfinished InputSplits exist
+     */
+    private String reserveInputSplit() {
+        List<String> inputSplitPathList = null;
+        try {
+            inputSplitPathList =
+                getZkExt().getChildrenExt(INPUT_SPLIT_PATH, false, false, true);
+            if (inputSplitCount == -1) {
+                inputSplitCount = inputSplitPathList.size();
+            }
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+        String reservedInputSplitPath = null;
+        Stat reservedStat = null;
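+        // Reservation protocol: an input split znode gains a persistent
+        // "finished" child once a worker completes it, and an ephemeral
+        // "reserved" child while a worker is loading it, so a reservation
+        // is released automatically if the reserving worker dies.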
+        while (true) {
+            int finishedInputSplits = 0;
+            for (int i = 0; i < inputSplitPathList.size(); ++i) {
+                String tmpInputSplitFinishedPath =
+                    inputSplitPathList.get(i) + INPUT_SPLIT_FINISHED_NODE;
+                try {
+                    reservedStat =
+                        getZkExt().exists(tmpInputSplitFinishedPath, true);
+                } catch (Exception e) {
+                    throw new RuntimeException(e);
+                }
+                if (reservedStat != null) {
+                    ++finishedInputSplits;
+                    continue;
+                }
+
+                String tmpInputSplitReservedPath =
+                    inputSplitPathList.get(i) + INPUT_SPLIT_RESERVED_NODE;
+                try {
+                    reservedStat =
+                        getZkExt().exists(tmpInputSplitReservedPath, true);
+                } catch (Exception e) {
+                    throw new RuntimeException(e);
+                }
+                if (reservedStat == null) {
+                    try {
+                        // Attempt to reserve this InputSplit
+                        getZkExt().createExt(tmpInputSplitReservedPath,
+                                       null,
+                                       Ids.OPEN_ACL_UNSAFE,
+                                       CreateMode.EPHEMERAL,
+                                       false);
+                        reservedInputSplitPath = inputSplitPathList.get(i);
+                        if (LOG.isInfoEnabled()) {
+                            float percentFinished =
+                               finishedInputSplits * 100.0f /
+                               inputSplitPathList.size();
+                            LOG.info("reserveInputSplit: Reserved input " +
+                                     "split path " + reservedInputSplitPath +
+                                     ", overall roughly " +
+                                     percentFinished +
+                                     "% input splits finished");
+                        }
+                        return reservedInputSplitPath;
+                    } catch (KeeperException.NodeExistsException e) {
+                        LOG.info("reserveInputSplit: Couldn't reserve " +
+                                 "(already reserved) inputSplit" +
+                                 " at " + tmpInputSplitReservedPath);
+                    } catch (KeeperException e) {
+                        throw new IllegalStateException(
+                            "reserveInputSplit: KeeperException on reserve", e);
+                    } catch (InterruptedException e) {
+                        throw new IllegalStateException(
+                            "reserveInputSplit: InterruptedException " +
+                            "on reserve", e);
+                    }
+                }
+            }
+            if (LOG.isInfoEnabled()) {
+                LOG.info("reserveInputSplit: reservedPath = " +
+                         reservedInputSplitPath + ", " + finishedInputSplits +
+                         " of " + inputSplitPathList.size() +
+                         " InputSplits are finished.");
+            }
+            if (finishedInputSplits == inputSplitPathList.size()) {
+                return null;
+            }
+            // Wait for either a reservation to go away or a notification that
+            // an InputSplit has finished.
+            getInputSplitsStateChangedEvent().waitMsecs(60 * 1000);
+            getInputSplitsStateChangedEvent().reset();
+        }
+    }
+
+    /**
+     * Load the vertices from the user-defined VertexReader into our partitions
+     * of vertex ranges.  Do this until all the InputSplits have been processed.
+     * All workers will try to do as many InputSplits as they can.  The master
+     * will monitor progress and stop this once all the InputSplits have been
+     * loaded and check-pointed.  Keep track of the last input split path to
+     * ensure the input split cache is flushed prior to marking the last input
+     * split complete.
+     *
+     * @throws IOException
+     * @throws IllegalAccessException
+     * @throws InstantiationException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    private VertexEdgeCount loadVertices() throws IOException,
+            ClassNotFoundException,
+            InterruptedException, InstantiationException,
+            IllegalAccessException {
+        String inputSplitPath = null;
+        VertexEdgeCount vertexEdgeCount = new VertexEdgeCount();
+        while ((inputSplitPath = reserveInputSplit()) != null) {
+            vertexEdgeCount = vertexEdgeCount.incrVertexEdgeCount(
+                loadVerticesFromInputSplit(inputSplitPath));
+        }
+
+        // Flush the remaining cached vertices
+        for (Entry<PartitionOwner, Partition<I, V, E, M>> entry :
+                inputSplitCache.entrySet()) {
+            if (!entry.getValue().getVertices().isEmpty()) {
+                commService.sendPartitionReq(entry.getKey().getWorkerInfo(),
+                                             entry.getValue());
+                entry.getValue().getVertices().clear();
+            }
+        }
+        inputSplitCache.clear();
+
+        return vertexEdgeCount;
+    }
+
+    /**
+     * Mark an input split path as completed by this worker.  This notifies
+     * the master and the other workers that this input split has not only
+     * been reserved, but also marked processed.
+     *
+     * @param inputSplitPath Path to the input split.
+     */
+    private void markInputSplitPathFinished(String inputSplitPath) {
+        String inputSplitFinishedPath =
+            inputSplitPath + INPUT_SPLIT_FINISHED_NODE;
+        try {
+            getZkExt().createExt(inputSplitFinishedPath,
+                    null,
+                    Ids.OPEN_ACL_UNSAFE,
+                    CreateMode.PERSISTENT,
+                    true);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.warn("markInputSplitPathFinished: " + inputSplitFinishedPath +
+                    " already exists!");
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "markInputSplitPathFinished: KeeperException on " +
+                inputSplitFinishedPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "markInputSplitPathFinished: InterruptedException on " +
+                inputSplitFinishedPath, e);
+        }
+    }
+
+    /**
+     * Extract vertices from input split, saving them into a mini cache of
+     * partitions.  Periodically flush the cache of vertices when a limit is
+     * reached in readVerticesFromInputSplit().
+     * Mark the input split finished when done.
+     *
+     * @param inputSplitPath ZK location of input split
+     * @return Count of vertices and edges loaded from this input split
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     * @throws InstantiationException
+     * @throws IllegalAccessException
+     */
+    private VertexEdgeCount loadVerticesFromInputSplit(String inputSplitPath)
+        throws IOException, ClassNotFoundException, InterruptedException,
+               InstantiationException, IllegalAccessException {
+        InputSplit inputSplit = getInputSplitForVertices(inputSplitPath);
+        VertexEdgeCount vertexEdgeCount =
+            readVerticesFromInputSplit(inputSplit);
+        if (LOG.isInfoEnabled()) {
+            LOG.info("loadVerticesFromInputSplit: Finished loading " +
+                     inputSplitPath + " " + vertexEdgeCount);
+        }
+        markInputSplitPathFinished(inputSplitPath);
+        return vertexEdgeCount;
+    }
+
+    /**
+     * Talk to ZooKeeper to convert the input split path to the actual
+     * InputSplit containing the vertices to read.
+     *
+     * @param inputSplitPath Location in ZK of input split
+     * @return instance of InputSplit containing vertices to read
+     * @throws IOException
+     * @throws ClassNotFoundException
+     */
+    private InputSplit getInputSplitForVertices(String inputSplitPath)
+            throws IOException, ClassNotFoundException {
+        byte[] splitList;
+        try {
+            splitList = getZkExt().getData(inputSplitPath, false, null);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "getInputSplitForVertices: KeeperException on " +
+                inputSplitPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "getInputSplitForVertices: InterruptedException on " +
+                inputSplitPath, e);
+        }
+        getContext().progress();
+
+        DataInputStream inputStream =
+            new DataInputStream(new ByteArrayInputStream(splitList));
+        String inputSplitClass = Text.readString(inputStream);
+        InputSplit inputSplit = (InputSplit)
+            ReflectionUtils.newInstance(
+                getConfiguration().getClassByName(inputSplitClass),
+                getConfiguration());
+        ((Writable) inputSplit).readFields(inputStream);
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("getInputSplitForVertices: Reserved " + inputSplitPath +
+                 " from ZooKeeper and got input split '" +
+                 inputSplit.toString() + "'");
+        }
+        return inputSplit;
+    }
+
+    /**
+     * Read vertices from input split.  If testing, the user may request a
+     * maximum number of vertices to be read from an input split.
+     *
+     * @param inputSplit Input split to process with vertex reader
+     * @return Count of vertices and edges read from this input split
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    private VertexEdgeCount readVerticesFromInputSplit(
+            InputSplit inputSplit) throws IOException, InterruptedException {
+        VertexInputFormat<I, V, E, M> vertexInputFormat =
+            BspUtils.<I, V, E, M>createVertexInputFormat(getConfiguration());
+        VertexReader<I, V, E, M> vertexReader =
+            vertexInputFormat.createVertexReader(inputSplit, getContext());
+        vertexReader.initialize(inputSplit, getContext());
+        long vertexCount = 0;
+        long edgeCount = 0;
+        while (vertexReader.nextVertex()) {
+            BasicVertex<I, V, E, M> readerVertex =
+                vertexReader.getCurrentVertex();
+            if (readerVertex.getVertexId() == null) {
+                throw new IllegalArgumentException(
+                    "readVerticesFromInputSplit: Vertex reader returned " +
+                    "a vertex without an id!  - " + readerVertex);
+            }
+            if (readerVertex.getVertexValue() == null) {
+                readerVertex.setVertexValue(
+                    BspUtils.<V>createVertexValue(getConfiguration()));
+            }
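+            // Route the vertex to its partition owner; vertices are cached
+            // per destination partition and shipped in bulk once the cached
+            // partition reaches maxVerticesPerPartition.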
+            PartitionOwner partitionOwner =
+                workerGraphPartitioner.getPartitionOwner(
+                    readerVertex.getVertexId());
+            Partition<I, V, E, M> partition =
+                inputSplitCache.get(partitionOwner);
+            if (partition == null) {
+                partition = new Partition<I, V, E, M>(
+                    getConfiguration(),
+                    partitionOwner.getPartitionId());
+                inputSplitCache.put(partitionOwner, partition);
+            }
+            BasicVertex<I, V, E, M> oldVertex =
+                partition.putVertex(readerVertex);
+            if (oldVertex != null) {
+                LOG.warn("readVerticesFromInputSplit: Replacing vertex " +
+                        oldVertex + " with " + readerVertex);
+            }
+            if (partition.getVertices().size() >= maxVerticesPerPartition) {
+                commService.sendPartitionReq(partitionOwner.getWorkerInfo(),
+                                             partition);
+                partition.getVertices().clear();
+            }
+            ++vertexCount;
+            edgeCount += readerVertex.getNumOutEdges();
+            getContext().progress();
+
+            ++totalVerticesLoaded;
+            totalEdgesLoaded += readerVertex.getNumOutEdges();
+            // Update status every half a million vertices
+            if ((totalVerticesLoaded % 500000) == 0) {
+                String status = "readVerticesFromInputSplit: Loaded " +
+                    totalVerticesLoaded + " vertices and " +
+                    totalEdgesLoaded + " edges " +
+                    MemoryUtils.getRuntimeMemoryStats() + " " +
+                    getGraphMapper().getMapFunctions().toString() +
+                    " - Attempt=" + getApplicationAttempt() +
+                    ", Superstep=" + getSuperstep();
+                if (LOG.isInfoEnabled()) {
+                    LOG.info(status);
+                }
+                getContext().setStatus(status);
+            }
+
+            // For sampling, or to limit outlier input splits, the number of
+            // records per input split can be limited
+            if ((inputSplitMaxVertices > 0) &&
+                    (vertexCount >= inputSplitMaxVertices)) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("readVerticesFromInputSplit: Leaving the input " +
+                            "split early, reached maximum vertices " +
+                            vertexCount);
+                }
+                break;
+            }
+        }
+        vertexReader.close();
+
+        return new VertexEdgeCount(vertexCount, edgeCount);
+    }
+
+    @Override
+    public void assignMessagesToVertex(BasicVertex<I, V, E, M> vertex,
+            Iterable<M> messageIterator) {
+        vertex.putMessages(messageIterator);
+    }
+
+    @Override
+    public void setup() {
+        // Unless doing a restart, prepare for computation:
+        // 1. Start superstep INPUT_SUPERSTEP (no computation)
+        // 2. Wait until the INPUT_SPLIT_ALL_READY_PATH node has been created
+        // 3. Process input splits until there are no more.
+        // 4. Wait until the INPUT_SPLIT_ALL_DONE_PATH node has been created
+        // 5. Wait for superstep INPUT_SUPERSTEP to complete.
+        if (getRestartedSuperstep() != UNSET_SUPERSTEP) {
+            setCachedSuperstep(getRestartedSuperstep());
+            return;
+        }
+
+        JSONObject jobState = getJobState();
+        if (jobState != null) {
+            try {
+                if ((ApplicationState.valueOf(jobState.getString(JSONOBJ_STATE_KEY)) ==
+                        ApplicationState.START_SUPERSTEP) &&
+                        jobState.getLong(JSONOBJ_SUPERSTEP_KEY) ==
+                        getSuperstep()) {
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("setup: Restarting from an automated " +
+                                 "checkpointed superstep " +
+                                 getSuperstep() + ", attempt " +
+                                 getApplicationAttempt());
+                    }
+                    setRestartedSuperstep(getSuperstep());
+                    return;
+                }
+            } catch (JSONException e) {
+                throw new RuntimeException(
+                    "setup: Failed to get key-values from " +
+                    jobState.toString(), e);
+            }
+        }
+
+        // Add the partitions that this worker owns
+        Collection<? extends PartitionOwner> masterSetPartitionOwners =
+            startSuperstep();
+        workerGraphPartitioner.updatePartitionOwners(
+            getWorkerInfo(), masterSetPartitionOwners, getPartitionMap());
+
+        commService.setup();
+
+        // Ensure the InputSplits are ready for processing before processing
+        while (true) {
+            Stat inputSplitsReadyStat;
+            try {
+                inputSplitsReadyStat =
+                    getZkExt().exists(INPUT_SPLITS_ALL_READY_PATH, true);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "setup: KeeperException waiting on input splits", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "setup: InterruptedException waiting on input splits", e);
+            }
+            if (inputSplitsReadyStat != null) {
+                break;
+            }
+            getInputSplitsAllReadyEvent().waitForever();
+            getInputSplitsAllReadyEvent().reset();
+        }
+
+        getContext().progress();
+
+        try {
+            VertexEdgeCount vertexEdgeCount = loadVertices();
+            if (LOG.isInfoEnabled()) {
+                LOG.info("setup: Finally loaded a total of " +
+                         vertexEdgeCount);
+            }
+        } catch (Exception e) {
+            LOG.error("setup: loadVertices failed - ", e);
+            throw new IllegalStateException("setup: loadVertices failed", e);
+        }
+        getContext().progress();
+
+        // Workers wait for each other to finish, coordinated by master
+        String workerDonePath =
+            INPUT_SPLIT_DONE_PATH + "/" + getWorkerInfo().getHostnameId();
+        try {
+            getZkExt().createExt(workerDonePath,
+                                 null,
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "setup: KeeperException creating worker done splits", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "setup: InterruptedException creating worker done splits", e);
+        }
+        while (true) {
+            Stat inputSplitsDoneStat;
+            try {
+                inputSplitsDoneStat =
+                    getZkExt().exists(INPUT_SPLITS_ALL_DONE_PATH, true);
+            } catch (KeeperException e) {
+                throw new IllegalStateException(
+                    "setup: KeeperException waiting on worker done splits", e);
+            } catch (InterruptedException e) {
+                throw new IllegalStateException(
+                    "setup: InterruptedException waiting on worker " +
+                    "done splits", e);
+            }
+            if (inputSplitsDoneStat != null) {
+                break;
+            }
+            getInputSplitsAllDoneEvent().waitForever();
+            getInputSplitsAllDoneEvent().reset();
+        }
+
+        // At this point all vertices have been sent to their destinations.
+        // Move them to the worker, creating the empty partitions
+        movePartitionsToWorker(commService);
+        for (PartitionOwner partitionOwner : masterSetPartitionOwners) {
+            if (partitionOwner.getWorkerInfo().equals(getWorkerInfo()) &&
+                !getPartitionMap().containsKey(
+                    partitionOwner.getPartitionId())) {
+                Partition<I, V, E, M> partition =
+                    new Partition<I, V, E, M>(getConfiguration(),
+                                              partitionOwner.getPartitionId());
+                getPartitionMap().put(partitionOwner.getPartitionId(),
+                                      partition);
+            }
+        }
+
+        // Generate the partition stats for the input superstep and process
+        // if necessary
+        List<PartitionStats> partitionStatsList =
+            new ArrayList<PartitionStats>();
+        for (Partition<I, V, E, M> partition : getPartitionMap().values()) {
+            PartitionStats partitionStats =
+                new PartitionStats(partition.getPartitionId(),
+                                   partition.getVertices().size(),
+                                   0,
+                                   partition.getEdgeCount());
+            partitionStatsList.add(partitionStats);
+        }
+        workerGraphPartitioner.finalizePartitionStats(
+            partitionStatsList, workerPartitionMap);
+
+        finishSuperstep(partitionStatsList);
+    }
+
+    /**
+     * Marshal the aggregator values to a JSONArray that will later be
+     * aggregated by the master.  Resets the set of aggregators in use
+     * for the next superstep.
+     *
+     * @param superstep Superstep whose aggregator values are marshaled
+     * @return JSONArray of the in-use aggregators' names, classes, and values
+     */
+    private JSONArray marshalAggregatorValues(long superstep) {
+        JSONArray aggregatorArray = new JSONArray();
+        if ((superstep == INPUT_SUPERSTEP) || aggregatorInUse.isEmpty()) {
+            return aggregatorArray;
+        }
+
+        for (String name : aggregatorInUse) {
+            try {
+                Aggregator<Writable> aggregator = getAggregatorMap().get(name);
+                ByteArrayOutputStream outputStream =
+                    new ByteArrayOutputStream();
+                DataOutput output = new DataOutputStream(outputStream);
+                aggregator.getAggregatedValue().write(output);
+
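+                // One marshaled entry looks roughly like (keys shown by
+                // their constant names, value Base64-encoded):
+                //   {AGGREGATOR_NAME_KEY: "<name>",
+                //    AGGREGATOR_CLASS_NAME_KEY: "<aggregator class>",
+                //    AGGREGATOR_VALUE_KEY: "<Base64 of the Writable bytes>"}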
+                JSONObject aggregatorObj = new JSONObject();
+                aggregatorObj.put(AGGREGATOR_NAME_KEY, name);
+                aggregatorObj.put(AGGREGATOR_CLASS_NAME_KEY,
+                                  aggregator.getClass().getName());
+                aggregatorObj.put(
+                    AGGREGATOR_VALUE_KEY,
+                    Base64.encodeBytes(outputStream.toByteArray()));
+                aggregatorArray.put(aggregatorObj);
+                LOG.info("marshalAggregatorValues: " +
+                         "Found aggregatorObj " +
+                         aggregatorObj + ", value (" +
+                         aggregator.getAggregatedValue() + ")");
+            } catch (Exception e) {
+                throw new RuntimeException(e);
+            }
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("marshalAggregatorValues: Finished assembling " +
+                     "aggregator values in JSONArray - " + aggregatorArray);
+        }
+        aggregatorInUse.clear();
+        return aggregatorArray;
+    }
+
+    /**
+     * Get values of aggregators aggregated by master in previous superstep.
+     *
+     * @param superstep Superstep to get the aggregated values from
+     */
+    private void getAggregatorValues(long superstep) {
+        if (superstep <= (INPUT_SUPERSTEP + 1)) {
+            return;
+        }
+        String mergedAggregatorPath =
+            getMergedAggregatorPath(getApplicationAttempt(), superstep - 1);
+        JSONArray aggregatorArray = null;
+        try {
+            byte[] zkData =
+                getZkExt().getData(mergedAggregatorPath, false, null);
+            aggregatorArray = new JSONArray(new String(zkData));
+        } catch (KeeperException.NoNodeException e) {
+            LOG.info("getAggregatorValues: no aggregators in " +
+                     mergedAggregatorPath + " on superstep " + superstep);
+            return;
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
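+        // Each element mirrors what marshalAggregatorValues() wrote: the
+        // aggregator name, its class name, and a Base64-encoded Writable
+        // value, decoded here back into the local aggregator.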
+        for (int i = 0; i < aggregatorArray.length(); ++i) {
+            try {
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("getAggregatorValues: " +
+                              "Getting aggregators from " +
+                              aggregatorArray.getJSONObject(i));
+                }
+                String aggregatorName = aggregatorArray.getJSONObject(i).
+                    getString(AGGREGATOR_NAME_KEY);
+                Aggregator<Writable> aggregator =
+                    getAggregatorMap().get(aggregatorName);
+                if (aggregator == null) {
+                    continue;
+                }
+                Writable aggregatorValue = aggregator.getAggregatedValue();
+                InputStream input =
+                    new ByteArrayInputStream(
+                        Base64.decode(aggregatorArray.getJSONObject(i).
+                            getString(AGGREGATOR_VALUE_KEY)));
+                aggregatorValue.readFields(
+                    new DataInputStream(input));
+                aggregator.setAggregatedValue(aggregatorValue);
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("getAggregatorValues: " +
+                              "Got aggregator=" + aggregatorName + " value=" +
+                               aggregatorValue);
+                }
+            } catch (Exception e) {
+                throw new RuntimeException(e);
+            }
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("getAggregatorValues: Finished loading " +
+                     mergedAggregatorPath + " with aggregator values " +
+                     aggregatorArray);
+        }
+    }
+
+    /**
+     * Register the health of this worker for a given superstep
+     *
+     * @param superstep Superstep to register health on
+     */
+    private void registerHealth(long superstep) {
+        JSONArray hostnamePort = new JSONArray();
+        hostnamePort.put(getHostname());
+        hostnamePort.put(workerInfo.getPort());
+
+        String myHealthPath = null;
+        if (isHealthy()) {
+            myHealthPath = getWorkerInfoHealthyPath(getApplicationAttempt(),
+                                                    getSuperstep());
+        } else {
+            myHealthPath = getWorkerInfoUnhealthyPath(getApplicationAttempt(),
+                                                      getSuperstep());
+        }
+        myHealthPath = myHealthPath + "/" + workerInfo.getHostnameId();
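+        // The health znode is created EPHEMERAL so it disappears if this
+        // worker's ZooKeeper session dies, which is how the master's
+        // checkHealthyWorkerFailure() notices that a healthy worker is gone.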
+        try {
+            myHealthZnode = getZkExt().createExt(
+                myHealthPath,
+                WritableUtils.writeToByteArray(workerInfo),
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.EPHEMERAL,
+                true);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.warn("registerHealth: myHealthPath already exists (likely " +
+                     "from previous failure): " + myHealthPath +
+                     ".  Waiting for change in attempts " +
+                     "to re-join the application");
+            getApplicationAttemptChangedEvent().waitForever();
+            if (LOG.isInfoEnabled()) {
+                LOG.info("registerHealth: Got application " +
+                         "attempt changed event, killing self");
+            }
+            throw new RuntimeException(
+                "registerHealth: Trying " +
+                "to get the new application attempt by killing self", e);
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("registerHealth: Created my health node for attempt=" +
+                     getApplicationAttempt() + ", superstep=" +
+                     getSuperstep() + " with " + myHealthZnode +
+                     " and workerInfo= " + workerInfo);
+        }
+    }
+
+    /**
+     * Delete this worker's health znode to notify the master more quickly
+     * that this worker has failed.
+     */
+    private void unregisterHealth() {
+        LOG.error("unregisterHealth: Got failure, unregistering health on " +
+                  myHealthZnode + " on superstep " + getSuperstep());
+        try {
+            getZkExt().delete(myHealthZnode, -1);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "unregisterHealth: InterruptedException - Couldn't delete " +
+                myHealthZnode, e);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "unregisterHealth: KeeperException - Couldn't delete " +
+                myHealthZnode, e);
+        }
+    }
+
+    @Override
+    public void failureCleanup() {
+        unregisterHealth();
+    }
+
+    @Override
+    public Collection<? extends PartitionOwner> startSuperstep() {
+        // Algorithm:
+        // 1. Communication service will combine messages from the previous
+        //    superstep
+        // 2. Register my health for the next superstep.
+        // 3. Wait until the partition assignment is complete and get it
+        // 4. Get the aggregator values from the previous superstep
+        if (getSuperstep() != INPUT_SUPERSTEP) {
+            commService.prepareSuperstep();
+        }
+
+        registerHealth(getSuperstep());
+
+        String partitionAssignmentsNode =
+            getPartitionAssignmentsPath(getApplicationAttempt(),
+                                        getSuperstep());
+        Collection<? extends PartitionOwner> masterSetPartitionOwners;
+        try {
+            while (getZkExt().exists(partitionAssignmentsNode, true) ==
+                    null) {
+                getPartitionAssignmentsReadyChangedEvent().waitForever();
+                getPartitionAssignmentsReadyChangedEvent().reset();
+            }
+            List<? extends Writable> writableList =
+                WritableUtils.readListFieldsFromZnode(
+                    getZkExt(),
+                    partitionAssignmentsNode,
+                    false,
+                    null,
+                    workerGraphPartitioner.createPartitionOwner().getClass(),
+                    getConfiguration());
+
+            @SuppressWarnings("unchecked")
+            Collection<? extends PartitionOwner> castedWritableList =
+                (Collection<? extends PartitionOwner>) writableList;
+            masterSetPartitionOwners = castedWritableList;
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "startSuperstep: KeeperException getting assignments", e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "startSuperstep: InterruptedException getting assignments", e);
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("startSuperstep: Ready for computation on superstep " +
+                     getSuperstep() + " since worker " +
+                     "selection and vertex range assignments are done in " +
+                     partitionAssignmentsNode);
+        }
+
+        if (getSuperstep() != INPUT_SUPERSTEP) {
+            getAggregatorValues(getSuperstep());
+        }
+        getContext().setStatus("startSuperstep: " +
+                               getGraphMapper().getMapFunctions().toString() +
+                               " - Attempt=" + getApplicationAttempt() +
+                               ", Superstep=" + getSuperstep());
+        return masterSetPartitionOwners;
+    }
+
+    @Override
+    public boolean finishSuperstep(List<PartitionStats> partitionStatsList) {
+        // This barrier blocks until success (or the master signals it to
+        // restart).
+        //
+        // Master will coordinate the barriers and aggregate "doneness" of all
+        // the vertices.  Each worker will:
+        // 1. Flush the unsent messages
+        // 2. Execute user postSuperstep() if necessary.
+        // 3. Save aggregator values that are in use.
+        // 4. Report the statistics (vertices, edges, messages, etc.)
+        //    of this worker
+        // 5. Let the master know it is finished.
+        // 6. Wait for the master's global stats, and check if done
+        long workerSentMessages = 0;
+        try {
+            workerSentMessages = commService.flush(getContext());
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "finishSuperstep: flush failed", e);
+        }
+
+        if (getSuperstep() != INPUT_SUPERSTEP) {
+            getWorkerContext().postSuperstep();
+            getContext().progress();
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("finishSuperstep: Superstep " + getSuperstep() + " " +
+                      MemoryUtils.getRuntimeMemoryStats());
+        }
+
+        JSONArray aggregatorValueArray =
+            marshalAggregatorValues(getSuperstep());
+        Collection<PartitionStats> finalizedPartitionStats =
+            workerGraphPartitioner.finalizePartitionStats(
+                partitionStatsList, workerPartitionMap);
+        List<PartitionStats> finalizedPartitionStatsList =
+            new ArrayList<PartitionStats>(finalizedPartitionStats);
+        byte[] partitionStatsBytes =
+            WritableUtils.writeListToByteArray(finalizedPartitionStatsList);
+        JSONObject workerFinishedInfoObj = new JSONObject();
+        try {
+            workerFinishedInfoObj.put(JSONOBJ_AGGREGATOR_VALUE_ARRAY_KEY,
+                                      aggregatorValueArray);
+            workerFinishedInfoObj.put(JSONOBJ_PARTITION_STATS_KEY,
+                                      Base64.encodeBytes(partitionStatsBytes));
+            workerFinishedInfoObj.put(JSONOBJ_NUM_MESSAGES_KEY,
+                                      workerSentMessages);
+        } catch (JSONException e) {
+            throw new RuntimeException(e);
+        }
+        String finishedWorkerPath =
+            getWorkerFinishedPath(getApplicationAttempt(), getSuperstep()) +
+            "/" + getHostnamePartitionId();
+        try {
+            getZkExt().createExt(finishedWorkerPath,
+                                 workerFinishedInfoObj.toString().getBytes(),
+                                 Ids.OPEN_ACL_UNSAFE,
+                                 CreateMode.PERSISTENT,
+                                 true);
+        } catch (KeeperException.NodeExistsException e) {
+            LOG.warn("finishSuperstep: finished worker path " +
+                     finishedWorkerPath + " already exists!");
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+
+        getContext().setStatus("finishSuperstep: (waiting for rest " +
+                               "of workers) " +
+                               getGraphMapper().getMapFunctions().toString() +
+                               " - Attempt=" + getApplicationAttempt() +
+                               ", Superstep=" + getSuperstep());
+
+        String superstepFinishedNode =
+            getSuperstepFinishedPath(getApplicationAttempt(), getSuperstep());
+        try {
+            while (getZkExt().exists(superstepFinishedNode, true) == null) {
+                getSuperstepFinishedEvent().waitForever();
+                getSuperstepFinishedEvent().reset();
+            }
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "finishSuperstep: Failed while waiting for master to " +
+                "signal completion of superstep " + getSuperstep(), e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "finishSuperstep: Failed while waiting for master to " +
+                "signal completion of superstep " + getSuperstep(), e);
+        }
+        GlobalStats globalStats = new GlobalStats();
+        WritableUtils.readFieldsFromZnode(
+            getZkExt(), superstepFinishedNode, false, null, globalStats);
+        if (LOG.isInfoEnabled()) {
+            LOG.info("finishSuperstep: Completed superstep " + getSuperstep() +
+                     " with global stats " + globalStats);
+        }
+        incrCachedSuperstep();
+        getContext().setStatus("finishSuperstep: (all workers done) " +
+                               getGraphMapper().getMapFunctions().toString() +
+                               " - Attempt=" + getApplicationAttempt() +
+                               ", Superstep=" + getSuperstep());
+        getGraphMapper().getGraphState().
+            setNumEdges(globalStats.getEdgeCount()).
+            setNumVertices(globalStats.getVertexCount());
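+        // The application can halt only when every vertex has voted to halt
+        // and no messages are in flight.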
+        return ((globalStats.getFinishedVertexCount() ==
+                globalStats.getVertexCount()) &&
+                (globalStats.getMessageCount() == 0));
+    }
+
+    /**
+     * Save the vertices from this worker's partitions using the
+     * user-defined VertexOutputFormat.
+     *
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    private void saveVertices() throws IOException, InterruptedException {
+        if (getConfiguration().get(GiraphJob.VERTEX_OUTPUT_FORMAT_CLASS)
+                == null) {
+            LOG.warn("saveVertices: " + GiraphJob.VERTEX_OUTPUT_FORMAT_CLASS +
+                     " not specified -- there will be no saved output");
+            return;
+        }
+
+        VertexOutputFormat<I, V, E> vertexOutputFormat =
+            BspUtils.<I, V, E>createVertexOutputFormat(getConfiguration());
+        VertexWriter<I, V, E> vertexWriter =
+            vertexOutputFormat.createVertexWriter(getContext());
+        vertexWriter.initialize(getContext());
+        for (Partition<I, V, E, M> partition : workerPartitionMap.values()) {
+            for (BasicVertex<I, V, E, M> vertex : partition.getVertices()) {
+                vertexWriter.writeVertex(vertex);
+            }
+        }
+        vertexWriter.close(getContext());
+    }
+
+    @Override
+    public void cleanup() throws IOException, InterruptedException {
+        commService.closeConnections();
+        setCachedSuperstep(getSuperstep() - 1);
+        saveVertices();
+        // All worker processes should denote they are done by adding a
+        // special znode.  Once the number of znodes equals the number of
+        // partitions for workers and masters, the master will clean up
+        // the ZooKeeper znodes associated with this job.
+        String cleanedUpPath = CLEANED_UP_PATH + "/" +
+            getTaskPartition() + WORKER_SUFFIX;
+        try {
+            String finalFinishedPath =
+                getZkExt().createExt(cleanedUpPath,
+                                     null,
+                                     Ids.OPEN_ACL_UNSAFE,
+                                     CreateMode.PERSISTENT,
+                                     true);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("cleanup: Notifying master it's okay to cleanup with " +
+                     finalFinishedPath);
+            }
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("cleanup: Couldn't create finished node '" +
+                         cleanedUpPath + "'");
+            }
+        } catch (KeeperException e) {
+            // Cleaning up, it's okay to fail after cleanup is successful
+            LOG.error("cleanup: Got KeeperException on notification " +
+                      "to master about cleanup", e);
+        } catch (InterruptedException e) {
+            // Cleaning up, it's okay to fail after cleanup is successful
+            LOG.error("cleanup: Got InterruptedException on notification " +
+                      "to master about cleanup", e);
+        }
+        try {
+            getZkExt().close();
+        } catch (InterruptedException e) {
+            // cleanup phase -- just log the error
+            LOG.error("cleanup: ZooKeeper failed to close with " + e);
+        }
+
+        // Preferably would shut down the service only after
+        // all clients have disconnected (or the exceptions on the
+        // client side ignored).
+        commService.close();
+    }
+
+    @Override
+    public void storeCheckpoint() throws IOException {
+        getContext().setStatus("storeCheckpoint: Starting checkpoint " +
+                getGraphMapper().getMapFunctions().toString() +
+                " - Attempt=" + getApplicationAttempt() +
+                ", Superstep=" + getSuperstep());
+
+        // Algorithm:
+        // For each partition, dump vertices and messages
+        Path metadataFilePath =
+            new Path(getCheckpointBasePath(getSuperstep()) + "." +
+                     getHostnamePartitionId() +
+                     CHECKPOINT_METADATA_POSTFIX);
+        Path verticesFilePath =
+            new Path(getCheckpointBasePath(getSuperstep()) + "." +
+                     getHostnamePartitionId() +
+                     CHECKPOINT_VERTICES_POSTFIX);
+        Path validFilePath =
+            new Path(getCheckpointBasePath(getSuperstep()) + "." +
+                     getHostnamePartitionId() +
+                     CHECKPOINT_VALID_POSTFIX);
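+        // The "valid" file is created last, after the metadata and vertices
+        // files are fully written, so its presence marks a complete and
+        // usable checkpoint.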
+
+        // Remove these files if they already exist (they shouldn't, unless
+        // there was a previous failure of this worker)
+        if (getFs().delete(validFilePath, false)) {
+            LOG.warn("storeCheckpoint: Removed valid file " +
+                     validFilePath);
+        }
+        if (getFs().delete(metadataFilePath, false)) {
+            LOG.warn("storeCheckpoint: Removed metadata file " +
+                     metadataFilePath);
+        }
+        if (getFs().delete(verticesFilePath, false)) {
+            LOG.warn("storeCheckpoint: Removed file " + verticesFilePath);
+        }
+
+        FSDataOutputStream verticesOutputStream =
+            getFs().create(verticesFilePath);
+        ByteArrayOutputStream metadataByteStream = new ByteArrayOutputStream();
+        DataOutput metadataOutput = new DataOutputStream(metadataByteStream);
+        for (Partition<I, V, E, M> partition : workerPartitionMap.values()) {
+            long startPos = verticesOutputStream.getPos();
+            partition.write(verticesOutputStream);
+            // Write the metadata for this partition
+            // Format:
+            // <index count>
+            //   <index 0 start pos><partition id>
+            //   <index 1 start pos><partition id>
+            metadataOutput.writeLong(startPos);
+            metadataOutput.writeInt(partition.getPartitionId());
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("storeCheckpoint: Vertex file starting " +
+                          "offset = " + startPos + ", length = " +
+                          (verticesOutputStream.getPos() - startPos) +
+                          ", partition = " + partition.toString());
+            }
+        }
+        // Metadata is buffered and written at the end since it's small and
+        // needs to know how many partitions this worker owns
+        FSDataOutputStream metadataOutputStream =
+            getFs().create(metadataFilePath);
+        metadataOutputStream.writeInt(workerPartitionMap.size());
+        metadataOutputStream.write(metadataByteStream.toByteArray());
+        metadataOutputStream.close();
+        verticesOutputStream.close();
+        if (LOG.isInfoEnabled()) {
+            LOG.info("storeCheckpoint: Finished metadata (" +
+                     metadataFilePath + ") and vertices (" + verticesFilePath
+                     + ").");
+        }
+
+        getFs().createNewFile(validFilePath);
+    }
+
+    @Override
+    public void loadCheckpoint(long superstep) {
+        // Algorithm:
+        // Examine all the partition owners and load the ones
+        // that match my hostname and id from the master-designated checkpoint
+        // prefixes.
+        long startPos = 0;
+        int loadedPartitions = 0;
+        for (PartitionOwner partitionOwner :
+                workerGraphPartitioner.getPartitionOwners()) {
+            if (partitionOwner.getWorkerInfo().equals(getWorkerInfo())) {
+                String metadataFile =
+                    partitionOwner.getCheckpointFilesPrefix() +
+                    CHECKPOINT_METADATA_POSTFIX;
+                String partitionsFile =
+                    partitionOwner.getCheckpointFilesPrefix() +
+                    CHECKPOINT_VERTICES_POSTFIX;
+                try {
+                    int partitionId = -1;
+                    DataInputStream metadataStream =
+                        getFs().open(new Path(metadataFile));
+                    int partitions = metadataStream.readInt();
+                    for (int i = 0; i < partitions; ++i) {
+                        startPos = metadataStream.readLong();
+                        partitionId = metadataStream.readInt();
+                        if (partitionId == partitionOwner.getPartitionId()) {
+                            break;
+                        }
+                    }
+                    if (partitionId != partitionOwner.getPartitionId()) {
+                        throw new IllegalStateException(
+                           "loadCheckpoint: " + partitionOwner +
+                           " not found!");
+                    }
+                    metadataStream.close();
+                    Partition<I, V, E, M> partition =
+                        new Partition<I, V, E, M>(
+                            getConfiguration(),
+                            partitionId);
+                    DataInputStream partitionsStream =
+                        getFs().open(new Path(partitionsFile));
+                    if (partitionsStream.skip(startPos) != startPos) {
+                        throw new IllegalStateException(
+                            "loadCheckpoint: Failed to skip " + startPos +
+                            " on " + partitionsFile);
+                    }
+                    partition.readFields(partitionsStream);
+                    partitionsStream.close();
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("loadCheckpoint: Loaded partition " +
+                                 partition);
+                    }
+                    if (getPartitionMap().put(partitionId, partition) != null) {
+                        throw new IllegalStateException(
+                            "loadCheckpoint: Already has partition owner " +
+                            partitionOwner);
+                    }
+                    ++loadedPartitions;
+                } catch (IOException e) {
+                    throw new RuntimeException(
+                        "loadCheckpoint: Failed to get partition owner " +
+                        partitionOwner, e);
+                }
+            }
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("loadCheckpoint: Loaded " + loadedPartitions +
+                    " partitions out of " +
+                    workerGraphPartitioner.getPartitionOwners().size() +
+                    " total.");
+        }
+        // Communication service needs to setup the connections prior to
+        // processing vertices
+        commService.setup();
+    }
+
+    /**
+     * Send the worker partitions to their destination workers
+     *
+     * @param workerPartitionMap Map of worker info to the partitions stored
+     *        on this worker to be sent
+     */
+    private void sendWorkerPartitions(
+            Map<WorkerInfo, List<Integer>> workerPartitionMap) {
+        List<Entry<WorkerInfo, List<Integer>>> randomEntryList =
+            new ArrayList<Entry<WorkerInfo, List<Integer>>>(
+                workerPartitionMap.entrySet());
+        Collections.shuffle(randomEntryList);
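+        // Shuffle the send order to reduce the chance that many workers
+        // send to the same destination at the same time.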
+        for (Entry<WorkerInfo, List<Integer>> workerPartitionList :
+                randomEntryList) {
+            for (Integer partitionId : workerPartitionList.getValue()) {
+                Partition<I, V, E, M> partition =
+                    getPartitionMap().get(partitionId);
+                if (partition == null) {
+                    throw new IllegalStateException(
+                        "sendWorkerPartitions: Couldn't find partition " +
+                        partitionId + " to send to " +
+                        workerPartitionList.getKey());
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("sendWorkerPartitions: Sending worker " +
+                             workerPartitionList.getKey() + " partition " +
+                             partitionId);
+                }
+                getGraphMapper().getGraphState().getWorkerCommunications().
+                sendPartitionReq(workerPartitionList.getKey(),
+                                 partition);
+                getPartitionMap().remove(partitionId);
+            }
+        }
+
+        String myPartitionExchangeDonePath =
+            getPartitionExchangeWorkerPath(
+                getApplicationAttempt(), getSuperstep(), getWorkerInfo());
+        try {
+            getZkExt().createExt(myPartitionExchangeDonePath,
+                    null,
+                    Ids.OPEN_ACL_UNSAFE,
+                    CreateMode.PERSISTENT,
+                    true);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "sendWorkerPartitions: KeeperException creating " +
+                myPartitionExchangeDonePath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "sendWorkerPartitions: InterruptedException creating " +
+                myPartitionExchangeDonePath, e);
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("sendWorkerPartitions: Done sending all my partitions.");
+        }
+    }
+
+    @Override
+    public final void exchangeVertexPartitions(
+            Collection<? extends PartitionOwner> masterSetPartitionOwners) {
+        // 1. Fix the addresses of the partition ids if they have changed.
+        // 2. Send all the partitions to their destination workers in a random
+        //    fashion.
+        // 3. Notify completion with a ZooKeeper stamp
+        // 4. Wait for all my dependencies to be done (if any)
+        // 5. Add the partitions to myself.
+        PartitionExchange partitionExchange =
+            workerGraphPartitioner.updatePartitionOwners(
+                getWorkerInfo(), masterSetPartitionOwners, getPartitionMap());
+        commService.fixPartitionIdToSocketAddrMap();
+
+        Map<WorkerInfo, List<Integer>> workerPartitionMap =
+            partitionExchange.getSendWorkerPartitionMap();
+        if (!workerPartitionMap.isEmpty()) {
+            sendWorkerPartitions(workerPartitionMap);
+        }
+
+        Set<WorkerInfo> myDependencyWorkerSet =
+            partitionExchange.getMyDependencyWorkerSet();
+        Set<String> workerIdSet = new HashSet<String>();
+        for (WorkerInfo workerInfo : myDependencyWorkerSet) {
+            if (!workerIdSet.add(workerInfo.getHostnameId())) {
+                throw new IllegalStateException(
+                    "exchangeVertexPartitions: Duplicate entry " + workerInfo);
+            }
+        }
+        if (myDependencyWorkerSet.isEmpty() && workerPartitionMap.isEmpty()) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("exchangeVertexPartitions: Nothing to exchange, " +
+                         "exiting early");
+            }
+            return;
+        }
+
+        String vertexExchangePath =
+            getPartitionExchangePath(getApplicationAttempt(), getSuperstep());
+        List<String> workerDoneList;
+        try {
+            while (true) {
+                workerDoneList = getZkExt().getChildrenExt(
+                    vertexExchangePath, true, false, false);
+                workerIdSet.removeAll(workerDoneList);
+                if (workerIdSet.isEmpty()) {
+                    break;
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("exchangeVertexPartitions: Waiting for workers " +
+                             workerIdSet);
+                }
+                getPartitionExchangeChildrenChangedEvent().waitForever();
+                getPartitionExchangeChildrenChangedEvent().reset();
+            }
+        } catch (KeeperException e) {
+            throw new RuntimeException(e);
+        } catch (InterruptedException e) {
+            throw new RuntimeException(e);
+        }
+
+        if (LOG.isInfoEnabled()) {
+            LOG.info("exchangeVertexPartitions: Done with exchange.");
+        }
+
+        // Add the partitions sent earlier
+        movePartitionsToWorker(commService);
+    }
+
+    /**
+     * Partitions that are exchanged need to be moved from the communication
+     * service to the worker.
+     *
+     * @param commService Communication service where the partitions are
+     *        temporarily stored.
+     */
+    private void movePartitionsToWorker(
+            ServerInterface<I, V, E, M> commService) {
+        Map<Integer, List<BasicVertex<I, V, E, M>>> inPartitionVertexMap =
+                commService.getInPartitionVertexMap();
+        synchronized (inPartitionVertexMap) {
+            for (Entry<Integer, List<BasicVertex<I, V, E, M>>> entry :
+                    inPartitionVertexMap.entrySet()) {
+                if (getPartitionMap().containsKey(entry.getKey())) {
+                    throw new IllegalStateException(
+                        "moveVerticesToWorker: Already has partition " +
+                        getPartitionMap().get(entry.getKey()) +
+                        ", cannot receive vertex list of size " +
+                        entry.getValue().size());
+                }
+
+                Partition<I, V, E, M> tmpPartition =
+                    new Partition<I, V, E, M>(getConfiguration(),
+                                              entry.getKey());
+                for (BasicVertex<I, V, E, M> vertex : entry.getValue()) {
+                    if (tmpPartition.putVertex(vertex) != null) {
+                        throw new IllegalStateException(
+                            "moveVerticesToWorker: Vertex " + vertex +
+                            " already exists!");
+                    }
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("moveVerticesToWorker: Adding " +
+                            entry.getValue().size() +
+                            " vertices for partition id " + entry.getKey());
+                }
+                getPartitionMap().put(tmpPartition.getPartitionId(),
+                                      tmpPartition);
+                entry.getValue().clear();
+            }
+            inPartitionVertexMap.clear();
+        }
+    }
+
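+    /**
+     * Get the event that is signaled when the children of the partition
+     * exchange znode change (i.e. at least one worker finished sending
+     * its partitions).
+     *
+     * @return Partition exchange children changed event
+     */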
+    final public BspEvent getPartitionExchangeChildrenChangedEvent() {
+        return partitionExchangeChildrenChanged;
+    }
+
+    @Override
+    protected boolean processEvent(WatchedEvent event) {
+        boolean foundEvent = false;
+        if (event.getPath().startsWith(MASTER_JOB_STATE_PATH) &&
+                (event.getType() == EventType.NodeChildrenChanged)) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("processEvent: Job state changed, checking " +
+                         "to see if it needs to restart");
+            }
+            JSONObject jsonObj = getJobState();
+            try {
+                if ((ApplicationState.valueOf(jsonObj.getString(JSONOBJ_STATE_KEY)) ==
+                        ApplicationState.START_SUPERSTEP) &&
+                        jsonObj.getLong(JSONOBJ_APPLICATION_ATTEMPT_KEY) !=
+                        getApplicationAttempt()) {
+                    LOG.fatal("processEvent: Worker will restart " +
+                              "from command - " + jsonObj.toString());
+                    System.exit(-1);
+                }
+            } catch (JSONException e) {
+                throw new RuntimeException(
+                    "processEvent: Couldn't properly get job state from " +
+                    jsonObj.toString(), e);
+            }
+            foundEvent = true;
+        } else if (event.getPath().contains(PARTITION_EXCHANGE_DIR) &&
+                   event.getType() == EventType.NodeChildrenChanged) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("processEvent : partitionExchangeChildrenChanged " +
+                         "(at least one worker is done sending partitions)");
+            }
+            partitionExchangeChildrenChanged.signal();
+            foundEvent = true;
+        }
+
+        return foundEvent;
+    }
+
+    @Override
+    public WorkerInfo getWorkerInfo() {
+        return workerInfo;
+    }
+
+    @Override
+    public Map<Integer, Partition<I, V, E, M>> getPartitionMap() {
+        return workerPartitionMap;
+    }
+
+    @Override
+    public Collection<? extends PartitionOwner> getPartitionOwners() {
+        return workerGraphPartitioner.getPartitionOwners();
+    }
+
+    @Override
+    public PartitionOwner getVertexPartitionOwner(I vertexIndex) {
+        return workerGraphPartitioner.getPartitionOwner(vertexIndex);
+    }
+
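+    /**
+     * Get the partition on this worker that holds (or would hold) the
+     * given vertex index.
+     *
+     * @param vertexIndex Vertex index to look up
+     * @return Owning partition, or null if not local to this worker
+     */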
+    public Partition<I, V, E, M> getPartition(I vertexIndex) {
+        PartitionOwner partitionOwner = getVertexPartitionOwner(vertexIndex);
+        return workerPartitionMap.get(partitionOwner.getPartitionId());
+    }
+
+    @Override
+    public BasicVertex<I, V, E, M> getVertex(I vertexIndex) {
+        PartitionOwner partitionOwner = getVertexPartitionOwner(vertexIndex);
+        if (workerPartitionMap.containsKey(partitionOwner.getPartitionId())) {
+            return workerPartitionMap.get(
+                partitionOwner.getPartitionId()).getVertex(vertexIndex);
+        } else {
+            return null;
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/BspUtils.java b/src/main/java/org/apache/giraph/graph/BspUtils.java
new file mode 100644
index 0000000..828f325
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/BspUtils.java
@@ -0,0 +1,454 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.giraph.graph.partition.GraphPartitionerFactory;
+import org.apache.giraph.graph.partition.HashPartitionerFactory;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Helper methods to retrieve the user-configured classes from the
+ * configuration, or to instantiate them.
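+ *
+ * For example, the configured vertex class can be instantiated as
+ * follows (the type parameters shown are only illustrative):
+ * <pre>
+ * BasicVertex&lt;LongWritable, DoubleWritable, FloatWritable,
+ *     DoubleWritable&gt; vertex = BspUtils.createVertex(conf);
+ * </pre>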
+ */
+public class BspUtils {
+    /**
+     * Get the user's subclassed {@link GraphPartitionerFactory}.
+     *
+     * @param conf Configuration to check
+     * @return User's graph partitioner
+     */
+    @SuppressWarnings({ "rawtypes", "unchecked" })
+    public static <I extends WritableComparable, V extends Writable,
+                   E extends Writable, M extends Writable>
+            Class<? extends GraphPartitionerFactory<I, V, E, M>>
+            getGraphPartitionerClass(Configuration conf) {
+        return (Class<? extends GraphPartitionerFactory<I, V, E, M>>)
+                conf.getClass(GiraphJob.GRAPH_PARTITIONER_FACTORY_CLASS,
+                              HashPartitionerFactory.class,
+                              GraphPartitionerFactory.class);
+    }
+
+    /**
+     * Create a user graph partitioner class
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user graph partitioner class
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable>
+            GraphPartitionerFactory<I, V, E, M>
+            createGraphPartitioner(Configuration conf) {
+        Class<? extends GraphPartitionerFactory<I, V, E, M>>
+            graphPartitionerFactoryClass =
+            getGraphPartitionerClass(conf);
+        return ReflectionUtils.newInstance(graphPartitionerFactoryClass, conf);
+    }
+
+    /**
+     * Create a user graph partitioner partition stats class
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user graph partition stats class
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable>
+            PartitionStats createGraphPartitionStats(Configuration conf) {
+        GraphPartitionerFactory<I, V, E, M> graphPartitioner =
+            createGraphPartitioner(conf);
+        return graphPartitioner.createMasterGraphPartitioner().
+            createPartitionStats();
+    }
+
+    /**
+     * Get the user's subclassed {@link VertexInputFormat}.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex input format class
+     */
+    @SuppressWarnings({ "rawtypes", "unchecked" })
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable,
+                   M extends Writable>
+            Class<? extends VertexInputFormat<I, V, E, M>>
+            getVertexInputFormatClass(Configuration conf) {
+        return (Class<? extends VertexInputFormat<I, V, E, M>>)
+                conf.getClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                              null,
+                              VertexInputFormat.class);
+    }
+
+    /**
+     * Create a user vertex input format class
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex input format class
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable,
+                   M extends Writable> VertexInputFormat<I, V, E, M>
+            createVertexInputFormat(Configuration conf) {
+        Class<? extends VertexInputFormat<I, V, E, M>> vertexInputFormatClass =
+            getVertexInputFormatClass(conf);
+        VertexInputFormat<I, V, E, M> inputFormat =
+            ReflectionUtils.newInstance(vertexInputFormatClass, conf);
+        return inputFormat;
+    }
+
+    /**
+     * Get the user's subclassed {@link VertexOutputFormat}.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex output format class
+     */
+    @SuppressWarnings({ "rawtypes", "unchecked" })
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable>
+            Class<? extends VertexOutputFormat<I, V, E>>
+            getVertexOutputFormatClass(Configuration conf) {
+        return (Class<? extends VertexOutputFormat<I, V, E>>)
+                conf.getClass(GiraphJob.VERTEX_OUTPUT_FORMAT_CLASS,
+                              null,
+                              VertexOutputFormat.class);
+    }
+
+    /**
+     * Create a user vertex output format class
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex output format class
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable> VertexOutputFormat<I, V, E>
+            createVertexOutputFormat(Configuration conf) {
+        Class<? extends VertexOutputFormat<I, V, E>> vertexOutputFormatClass =
+            getVertexOutputFormatClass(conf);
+        return ReflectionUtils.newInstance(vertexOutputFormatClass, conf);
+    }
+
+    /**
+     * Get the user's subclassed {@link AggregatorWriter}.
+     *
+     * @param conf Configuration to check
+     * @return User's aggregator writer class
+     */
+    public static Class<? extends AggregatorWriter>
+            getAggregatorWriterClass(Configuration conf) {
+        return conf.getClass(GiraphJob.AGGREGATOR_WRITER_CLASS,
+                             TextAggregatorWriter.class,
+                             AggregatorWriter.class);
+    }
+
+    /**
+     * Create a user aggregator writer
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user aggregator writer class
+     */
+    public static AggregatorWriter
+            createAggregatorWriter(Configuration conf) {
+        Class<? extends AggregatorWriter> aggregatorWriterClass =
+            getAggregatorWriterClass(conf);
+        return ReflectionUtils.newInstance(aggregatorWriterClass, conf);
+    }
+
+    /**
+     * Get the user's subclassed {@link VertexCombiner}.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex combiner class
+     */
+    @SuppressWarnings({ "rawtypes", "unchecked" })
+    public static <I extends WritableComparable,
+                   M extends Writable>
+            Class<? extends VertexCombiner<I, M>>
+            getVertexCombinerClass(Configuration conf) {
+        return (Class<? extends VertexCombiner<I, M>>)
+                conf.getClass(GiraphJob.VERTEX_COMBINER_CLASS,
+                              null,
+                              VertexCombiner.class);
+    }
+
+    /**
+     * Create a user vertex combiner class
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex combiner class
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, M extends Writable>
+            VertexCombiner<I, M> createVertexCombiner(Configuration conf) {
+        Class<? extends VertexCombiner<I, M>> vertexCombinerClass =
+            getVertexCombinerClass(conf);
+        return ReflectionUtils.newInstance(vertexCombinerClass, conf);
+    }
+
+    /**
+     * Get the user's subclassed {@link VertexResolver}.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex resolver class
+     */
+    @SuppressWarnings({ "unchecked", "rawtypes" })
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable,
+                   M extends Writable>
+            Class<? extends VertexResolver<I, V, E, M>>
+            getVertexResolverClass(Configuration conf) {
+        return (Class<? extends VertexResolver<I, V, E, M>>)
+                conf.getClass(GiraphJob.VERTEX_RESOLVER_CLASS,
+                              VertexResolver.class,
+                              VertexResolver.class);
+    }
+
+    /**
+     * Create a user vertex resolver
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex resolver
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable> VertexResolver<I, V, E, M>
+            createVertexResolver(Configuration conf,
+                                 GraphState<I, V, E, M> graphState) {
+        Class<? extends VertexResolver<I, V, E, M>> vertexResolverClass =
+            getVertexResolverClass(conf);
+        VertexResolver<I, V, E, M> resolver =
+            ReflectionUtils.newInstance(vertexResolverClass, conf);
+        resolver.setGraphState(graphState);
+        return resolver;
+    }
+
+    /**
+     * Get the user's subclassed {@link WorkerContext}.
+     *
+     * @param conf Configuration to check
+     * @return User's worker context class
+     */
+    public static Class<? extends WorkerContext>
+            getWorkerContextClass(Configuration conf) {
+        return conf.getClass(GiraphJob.WORKER_CONTEXT_CLASS,
+                             DefaultWorkerContext.class,
+                             WorkerContext.class);
+    }
+
+    /**
+     * Create a user worker context
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user worker context
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable,
+                   M extends Writable>
+            WorkerContext createWorkerContext(Configuration conf,
+                GraphState<I, V, E, M> graphState) {
+        Class<? extends WorkerContext> workerContextClass =
+            getWorkerContextClass(conf);
+        WorkerContext workerContext =
+            ReflectionUtils.newInstance(workerContextClass, conf);
+        workerContext.setGraphState(graphState);
+        return workerContext;
+    }
+
+    /**
+     * Get the user's subclassed {@link BasicVertex}
+     *
+     * @param conf Configuration to check
+     * @return User's vertex class
+     */
+    @SuppressWarnings({ "rawtypes", "unchecked" })
+    public static <I extends WritableComparable,
+                   V extends Writable,
+                   E extends Writable,
+                   M extends Writable>
+            Class<? extends BasicVertex<I, V, E, M>>
+            getVertexClass(Configuration conf) {
+        return (Class<? extends BasicVertex<I, V, E, M>>)
+                conf.getClass(GiraphJob.VERTEX_CLASS,
+                              null,
+                              BasicVertex.class);
+    }
+
+    /**
+     * Create a user vertex
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable> BasicVertex<I, V, E, M>
+            createVertex(Configuration conf) {
+        Class<? extends BasicVertex<I, V, E, M>> vertexClass =
+            getVertexClass(conf);
+        BasicVertex<I, V, E, M> vertex =
+            ReflectionUtils.newInstance(vertexClass, conf);
+        return vertex;
+    }
+
+    /**
+     * Get the user's subclassed vertex index class.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex index class
+     */
+    @SuppressWarnings("unchecked")
+    public static <I extends Writable> Class<I>
+            getVertexIndexClass(Configuration conf) {
+        return (Class<I>) conf.getClass(GiraphJob.VERTEX_INDEX_CLASS,
+                                        WritableComparable.class);
+    }
+
+    /**
+     * Create a user vertex index
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex index
+     */
+    @SuppressWarnings("rawtypes")
+    public static <I extends WritableComparable>
+            I createVertexIndex(Configuration conf) {
+        Class<I> vertexClass = getVertexIndexClass(conf);
+        try {
+            return vertexClass.newInstance();
+        } catch (InstantiationException e) {
+            throw new IllegalArgumentException(
+                "createVertexIndex: Failed to instantiate", e);
+        } catch (IllegalAccessException e) {
+            throw new IllegalArgumentException(
+                "createVertexIndex: Illegally accessed", e);
+        }
+    }
+
+    /**
+     * Get the user's subclassed vertex value class.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex value class
+     */
+    @SuppressWarnings("unchecked")
+    public static <V extends Writable> Class<V>
+            getVertexValueClass(Configuration conf) {
+        return (Class<V>) conf.getClass(GiraphJob.VERTEX_VALUE_CLASS,
+                                        Writable.class);
+    }
+
+    /**
+     * Create a user vertex value
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex value
+     */
+    public static <V extends Writable> V
+            createVertexValue(Configuration conf) {
+        Class<V> vertexValueClass = getVertexValueClass(conf);
+        try {
+            return vertexValueClass.newInstance();
+        } catch (InstantiationException e) {
+            throw new IllegalArgumentException(
+                "createVertexValue: Failed to instantiate", e);
+        } catch (IllegalAccessException e) {
+            throw new IllegalArgumentException(
+                "createVertexValue: Illegally accessed", e);
+        }
+    }
+
+    /**
+     * Get the user's subclassed edge value class.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex edge value class
+     */
+    @SuppressWarnings("unchecked")
+    public static <E extends Writable> Class<E>
+            getEdgeValueClass(Configuration conf) {
+        return (Class<E>) conf.getClass(GiraphJob.EDGE_VALUE_CLASS,
+                                        Writable.class);
+    }
+
+    /**
+     * Create a user edge value
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user edge value
+     */
+    public static <E extends Writable> E
+            createEdgeValue(Configuration conf) {
+        Class<E> edgeValueClass = getEdgeValueClass(conf);
+        try {
+            return edgeValueClass.newInstance();
+        } catch (InstantiationException e) {
+            throw new IllegalArgumentException(
+                "createEdgeValue: Failed to instantiate", e);
+        } catch (IllegalAccessException e) {
+            throw new IllegalArgumentException(
+                "createEdgeValue: Illegally accessed", e);
+        }
+    }
+
+    /**
+     * Get the user's subclassed vertex message value class.
+     *
+     * @param conf Configuration to check
+     * @return User's vertex message value class
+     */
+    @SuppressWarnings("unchecked")
+    public static <M extends Writable> Class<M>
+            getMessageValueClass(Configuration conf) {
+        return (Class<M>) conf.getClass(GiraphJob.MESSAGE_VALUE_CLASS,
+                                        Writable.class);
+    }
+
+    /**
+     * Create a user vertex message value
+     *
+     * @param conf Configuration to check
+     * @return Instantiated user vertex message value
+     */
+    public static <M extends Writable> M
+            createMessageValue(Configuration conf) {
+        Class<M> messageValueClass = getMessageValueClass(conf);
+        try {
+            return messageValueClass.newInstance();
+        } catch (InstantiationException e) {
+            throw new IllegalArgumentException(
+                "createMessageValue: Failed to instantiate", e);
+        } catch (IllegalAccessException e) {
+            throw new IllegalArgumentException(
+                "createMessageValue: Illegally accessed", e);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/DefaultWorkerContext.java b/src/main/java/org/apache/giraph/graph/DefaultWorkerContext.java
new file mode 100644
index 0000000..39f3030
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/DefaultWorkerContext.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+/**
+ * A dumb implementation of {@link WorkerContext}. This is the default
+ * implementation when no WorkerContext is defined by the user. It does
+ * nothing.
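+ *
+ * A user-supplied context overrides the same lifecycle hooks, e.g. (a
+ * hypothetical sketch; the remaining hooks are omitted for brevity):
+ * <pre>
+ * public class TimedWorkerContext extends WorkerContext {
+ *     private long start;
+ *     &#64;Override
+ *     public void preSuperstep() { start = System.currentTimeMillis(); }
+ *     &#64;Override
+ *     public void postSuperstep() {
+ *         System.err.println("superstep ms: " +
+ *             (System.currentTimeMillis() - start));
+ *     }
+ * }
+ * </pre>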
+ */
+public class DefaultWorkerContext extends WorkerContext {
+
+    @Override
+    public void preApplication() { }
+
+    @Override
+    public void postApplication() { }
+
+    @Override
+    public void preSuperstep() { }
+
+    @Override
+    public void postSuperstep() { }
+}
\ No newline at end of file
diff --git a/src/main/java/org/apache/giraph/graph/Edge.java b/src/main/java/org/apache/giraph/graph/Edge.java
new file mode 100644
index 0000000..b276a8a
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/Edge.java
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+/**
+ * A complete edge, the destination vertex and the edge value.  There can
+ * be only one edge per destination vertex id in an edge map.
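+ *
+ * For example, an edge to vertex 5 with weight 0.5 could be built as
+ * follows (LongWritable/FloatWritable are chosen for illustration):
+ * <pre>
+ * Edge&lt;LongWritable, FloatWritable&gt; edge =
+ *     new Edge&lt;LongWritable, FloatWritable&gt;(
+ *         new LongWritable(5), new FloatWritable(0.5f));
+ * </pre>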
+ *
+ * @param <I> Vertex index
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public class Edge<I extends WritableComparable, E extends Writable>
+        implements WritableComparable<Edge<I, E>>, Configurable {
+    /** Destination vertex id */
+    private I destVertexId = null;
+    /** Edge value */
+    private E edgeValue = null;
+    /** Configuration - Used to instantiate classes */
+    private Configuration conf = null;
+
+    /**
+     * Constructor for reflection
+     */
+    public Edge() {}
+
+    /**
+     * Create the edge with final values
+     *
+     * @param destVertexId Destination vertex id
+     * @param edgeValue Value of the edge
+     */
+    public Edge(I destVertexId, E edgeValue) {
+        this.destVertexId = destVertexId;
+        this.edgeValue = edgeValue;
+    }
+
+    /**
+     * Get the destination vertex index of this edge
+     *
+     * @return Destination vertex index of this edge
+     */
+    public I getDestVertexId() {
+        return destVertexId;
+    }
+
+    /**
+     * Get the edge value of the edge
+     *
+     * @return Edge value of this edge
+     */
+    public E getEdgeValue() {
+        return edgeValue;
+    }
+
+    /**
+     * Set the destination vertex index of this edge.
+     *
+     * @param destVertexId new destination vertex
+     */
+    public void setDestVertexId(I destVertexId) {
+        this.destVertexId = destVertexId;
+    }
+
+    /**
+     * Set the value for this edge.
+     *
+     * @param edgeValue new edge value
+     */
+    public void setEdgeValue(E edgeValue) {
+        this.edgeValue = edgeValue;
+    }
+
+    @Override
+    public String toString() {
+        return "(DestVertexIndex = " + destVertexId +
+            ", edgeValue = " + edgeValue  + ")";
+    }
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        destVertexId = (I) BspUtils.createVertexIndex(getConf());
+        destVertexId.readFields(input);
+        edgeValue = (E) BspUtils.createEdgeValue(getConf());
+        edgeValue.readFields(input);
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        if (destVertexId == null) {
+            throw new IllegalStateException(
+                "write: Null destination vertex index");
+        }
+        if (edgeValue == null) {
+            throw new IllegalStateException(
+                "write: Null edge value");
+        }
+        destVertexId.write(output);
+        edgeValue.write(output);
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public int compareTo(Edge<I, E> edge) {
+        return destVertexId.compareTo(edge.getDestVertexId());
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) { return true; }
+        if (o == null || getClass() != o.getClass()) { return false; }
+
+        Edge edge = (Edge) o;
+
+        if (destVertexId != null ? !destVertexId.equals(edge.destVertexId) :
+            edge.destVertexId != null) {
+            return false;
+        }
+        if (edgeValue != null ? !edgeValue.equals(edge.edgeValue) : edge.edgeValue != null) {
+            return false;
+        }
+
+        return true;
+    }
+
+    @Override
+    public int hashCode() {
+        int result = destVertexId != null ? destVertexId.hashCode() : 0;
+        result = 31 * result + (edgeValue != null ? edgeValue.hashCode() : 0);
+        return result;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/EdgeListVertex.java b/src/main/java/org/apache/giraph/graph/EdgeListVertex.java
new file mode 100644
index 0000000..0e0d730
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/EdgeListVertex.java
@@ -0,0 +1,312 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import com.google.common.collect.Iterables;
+import org.apache.giraph.utils.ComparisonUtils;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.log4j.Logger;
+
+import com.google.common.collect.Lists;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * User applications can subclass {@link EdgeListVertex}, which stores
+ * the outbound edges in an ArrayList (less memory at the cost of expensive
+ * sorting and random-access lookup).  Good for static graphs.
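+ *
+ * A minimal subclass only has to implement compute(), e.g. (a
+ * hypothetical sketch, assuming the compute(Iterator) signature
+ * inherited from {@link BasicVertex}):
+ * <pre>
+ * public class HaltVertex extends EdgeListVertex&lt;LongWritable,
+ *         DoubleWritable, FloatWritable, DoubleWritable&gt; {
+ *     &#64;Override
+ *     public void compute(Iterator&lt;DoubleWritable&gt; msgIterator) {
+ *         voteToHalt();
+ *     }
+ * }
+ * </pre>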
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class EdgeListVertex<I extends WritableComparable,
+        V extends Writable,
+        E extends Writable, M extends Writable>
+        extends MutableVertex<I, V, E, M> {
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(EdgeListVertex.class);
+    /** Vertex id */
+    private I vertexId = null;
+    /** Vertex value */
+    private V vertexValue = null;
+    /** List of the dest edge indices */
+    private List<I> destEdgeIndexList;
+    /** List of the dest edge values, parallel to destEdgeIndexList */
+    private List<E> destEdgeValueList;
+    /** List of incoming messages from the previous superstep */
+    private List<M> msgList;
+
+    @Override
+    public void initialize(I vertexId, V vertexValue,
+                           Map<I, E> edges,
+                           Iterable<M> messages) {
+        if (vertexId != null) {
+            setVertexId(vertexId);
+        }
+        if (vertexValue != null) {
+            setVertexValue(vertexValue);
+        }
+        if (edges != null && !edges.isEmpty()) {
+            destEdgeIndexList = Lists.newArrayListWithCapacity(edges.size());
+            destEdgeValueList = Lists.newArrayListWithCapacity(edges.size());
+            List<I> sortedIndexList = new ArrayList<I>(edges.keySet());
+            Collections.sort(sortedIndexList, new VertexIdComparator());
+            for (I index : sortedIndexList) {
+                destEdgeIndexList.add(index);
+                destEdgeValueList.add(edges.get(index));
+            }
+            sortedIndexList.clear();
+        } else {
+            destEdgeIndexList = Lists.newArrayListWithCapacity(0);
+            destEdgeValueList = Lists.newArrayListWithCapacity(0);
+        }
+        if (messages != null) {
+            msgList = Lists.newArrayListWithCapacity(Iterables.size(messages));
+            Iterables.<M>addAll(msgList, messages);
+        } else {
+            msgList = Lists.newArrayListWithCapacity(0);
+        }
+    }
+
+    @Override
+    public boolean equals(Object other) {
+        if (other instanceof EdgeListVertex) {
+            @SuppressWarnings("unchecked")
+            EdgeListVertex<I, V, E, M> otherVertex = (EdgeListVertex) other;
+            if (!getVertexId().equals(otherVertex.getVertexId())) {
+                return false;
+            }
+            if (!getVertexValue().equals(otherVertex.getVertexValue())) {
+                return false;
+            }
+            if (!ComparisonUtils.equal(getMessages(),
+                    otherVertex.getMessages())) {
+                return false;
+            }
+            return ComparisonUtils.equal(iterator(), otherVertex.iterator());
+        }
+        return false;
+    }
+
+    /**
+     * Comparator for the vertex id
+     */
+    private class VertexIdComparator implements Comparator<I> {
+        @SuppressWarnings("unchecked")
+        @Override
+        public int compare(I index1, I index2) {
+            return index1.compareTo(index2);
+        }
+    }
+
+    @Override
+    public final boolean addEdge(I targetVertexId, E edgeValue) {
+        if (LOG.isTraceEnabled()) {
+            LOG.trace("addEdge: Vertex=" + vertexId + " adding edge to " +
+                      targetVertexId + " with value " + edgeValue);
+        }
+        int pos = Collections.binarySearch(destEdgeIndexList,
+                                           targetVertexId,
+                                           new VertexIdComparator());
+        if (pos < 0) {
+            destEdgeIndexList.add(-1 * (pos + 1), targetVertexId);
+            destEdgeValueList.add(-1 * (pos + 1), edgeValue);
+            return true;
+        } else {
+            LOG.warn("addEdge: Vertex=" + vertexId +
+                     ": already added an edge value for dest vertex id " +
+                     targetVertexId);
+            return false;
+        }
+    }
+
+    @Override
+    public long getSuperstep() {
+        return getGraphState().getSuperstep();
+    }
+
+    @Override
+    public final void setVertexId(I vertexId) {
+        this.vertexId = vertexId;
+    }
+
+    @Override
+    public final I getVertexId() {
+        return vertexId;
+    }
+
+    @Override
+    public final V getVertexValue() {
+        return vertexValue;
+    }
+
+    @Override
+    public final void setVertexValue(V vertexValue) {
+        this.vertexValue = vertexValue;
+    }
+
+    @Override
+    public E getEdgeValue(I targetVertexId) {
+        int pos = Collections.binarySearch(destEdgeIndexList,
+                targetVertexId,
+                new VertexIdComparator());
+        if (pos < 0) {
+            return null;
+        } else {
+            return destEdgeValueList.get(pos);
+        }
+    }
+
+    @Override
+    public boolean hasEdge(I targetVertexId) {
+        int pos = Collections.binarySearch(destEdgeIndexList,
+                targetVertexId,
+                new VertexIdComparator());
+        if (pos < 0) {
+            return false;
+        } else {
+            return true;
+        }
+    }
+
+    /**
+     * Get an iterator to the edges on this vertex.
+     *
+     * @return A <em>sorted</em> iterator, as defined by the sort-order
+     *         of the vertex ids
+     */
+    @Override
+    public Iterator<I> iterator() {
+        return destEdgeIndexList.iterator();
+    }
+
+    @Override
+    public int getNumOutEdges() {
+        return destEdgeIndexList.size();
+    }
+
+    @Override
+    public E removeEdge(I targetVertexId) {
+        int pos = Collections.binarySearch(destEdgeIndexList,
+                targetVertexId,
+                new VertexIdComparator());
+        if (pos < 0) {
+            return null;
+        } else {
+            destEdgeIndexList.remove(pos);
+            return destEdgeValueList.remove(pos);
+        }
+    }
+
+    @Override
+    public final void sendMsgToAllEdges(M msg) {
+        if (msg == null) {
+            throw new IllegalArgumentException(
+                "sendMsgToAllEdges: Cannot send null message to all edges");
+        }
+        for (I index : destEdgeIndexList) {
+            sendMsg(index, msg);
+        }
+    }
+
+    @Override
+    final public void readFields(DataInput in) throws IOException {
+        vertexId = BspUtils.<I>createVertexIndex(getConf());
+        vertexId.readFields(in);
+        boolean hasVertexValue = in.readBoolean();
+        if (hasVertexValue) {
+            vertexValue = BspUtils.<V>createVertexValue(getConf());
+            vertexValue.readFields(in);
+        }
+        int edgeListCount = in.readInt();
+        destEdgeIndexList = Lists.newArrayListWithCapacity(edgeListCount);
+        destEdgeValueList = Lists.newArrayListWithCapacity(edgeListCount);
+        for (int i = 0; i < edgeListCount; ++i) {
+            I vertexId = BspUtils.<I>createVertexIndex(getConf());
+            E edgeValue = BspUtils.<E>createEdgeValue(getConf());
+            vertexId.readFields(in);
+            edgeValue.readFields(in);
+            destEdgeIndexList.add(vertexId);
+            destEdgeValueList.add(edgeValue);
+        }
+        int msgListSize = in.readInt();
+        msgList = Lists.newArrayListWithCapacity(msgListSize);
+        for (int i = 0; i < msgListSize; ++i) {
+            M msg = BspUtils.<M>createMessageValue(getConf());
+            msg.readFields(in);
+            msgList.add(msg);
+        }
+        halt = in.readBoolean();
+    }
+
+    @Override
+    final public void write(DataOutput out) throws IOException {
+        vertexId.write(out);
+        out.writeBoolean(vertexValue != null);
+        if (vertexValue != null) {
+            vertexValue.write(out);
+        }
+        out.writeInt(destEdgeIndexList.size());
+        for (int i = 0; i < destEdgeIndexList.size(); ++i) {
+            destEdgeIndexList.get(i).write(out);
+            destEdgeValueList.get(i).write(out);
+        }
+        out.writeInt(msgList.size());
+        for (M msg : msgList) {
+            msg.write(out);
+        }
+        out.writeBoolean(halt);
+    }
+
+    @Override
+    void putMessages(Iterable<M> messages) {
+        msgList.clear();
+        for (M message : messages) {
+            msgList.add(message);
+        }
+    }
+
+    @Override
+    public Iterable<M> getMessages() {
+        return Iterables.unmodifiableIterable(msgList);
+    }
+
+    @Override
+    void releaseResources() {
+        // Hint to GC to free the messages
+        msgList.clear();
+    }
+
+    @Override
+    public String toString() {
+        return "Vertex(id=" + getVertexId() + ",value=" + getVertexValue() +
+            ",#edges=" + getNumOutEdges() + ")";
+    }
+}
+
diff --git a/src/main/java/org/apache/giraph/graph/GiraphJob.java b/src/main/java/org/apache/giraph/graph/GiraphJob.java
new file mode 100644
index 0000000..6210715
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/GiraphJob.java
@@ -0,0 +1,592 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.giraph.bsp.BspInputFormat;
+import org.apache.giraph.bsp.BspOutputFormat;
+import org.apache.giraph.graph.partition.GraphPartitionerFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+
+/**
+ * Limits the functions that can be called by the user.  Job is too flexible
+ * for our needs.  For instance, our job should not have any reduce tasks.
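+ *
+ * A typical setup (MyVertex and MyVertexInputFormat stand in for
+ * hypothetical user classes) looks like:
+ * <pre>
+ * GiraphJob job = new GiraphJob("my app");
+ * job.setVertexClass(MyVertex.class);
+ * job.setVertexInputFormatClass(MyVertexInputFormat.class);
+ * job.getConfiguration().setInt(GiraphJob.MIN_WORKERS, 1);
+ * job.getConfiguration().setInt(GiraphJob.MAX_WORKERS, 1);
+ * </pre>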
+ */
+public class GiraphJob extends Job {
+    /** Vertex class - required */
+    public static final String VERTEX_CLASS = "giraph.vertexClass";
+    /** VertexInputFormat class - required */
+    public static final String VERTEX_INPUT_FORMAT_CLASS =
+        "giraph.vertexInputFormatClass";
+
+    /** VertexOutputFormat class - optional */
+    public static final String VERTEX_OUTPUT_FORMAT_CLASS =
+        "giraph.vertexOutputFormatClass";
+    /** Vertex combiner class - optional */
+    public static final String VERTEX_COMBINER_CLASS =
+        "giraph.combinerClass";
+    /** Vertex resolver class - optional */
+    public static final String VERTEX_RESOLVER_CLASS =
+        "giraph.vertexResolverClass";
+    /** Graph partitioner factory class - optional */
+    public static final String GRAPH_PARTITIONER_FACTORY_CLASS =
+        "giraph.graphPartitionerFactoryClass";
+
+    /** Vertex index class */
+    public static final String VERTEX_INDEX_CLASS = "giraph.vertexIndexClass";
+    /** Vertex value class */
+    public static final String VERTEX_VALUE_CLASS = "giraph.vertexValueClass";
+    /** Edge value class */
+    public static final String EDGE_VALUE_CLASS = "giraph.edgeValueClass";
+    /** Message value class */
+    public static final String MESSAGE_VALUE_CLASS = "giraph.messageValueClass";
+    /** Worker context class */
+    public static final String WORKER_CONTEXT_CLASS =
+        "giraph.workerContextClass";
+    /** AggregatorWriter class - optional */
+    public static final String AGGREGATOR_WRITER_CLASS =
+        "giraph.aggregatorWriterClass";
+
+    /**
+     * Minimum number of simultaneous workers before this job can run (int)
+     */
+    public static final String MIN_WORKERS = "giraph.minWorkers";
+    /**
+     * Maximum number of simultaneous worker tasks started by this job (int).
+     */
+    public static final String MAX_WORKERS = "giraph.maxWorkers";
+
+    /**
+     * Separate the workers and the master tasks.  This is required
+     * to support dynamic recovery. (boolean)
+     */
+    public static final String SPLIT_MASTER_WORKER =
+        "giraph.SplitMasterWorker";
+    /**
+     * Default on whether to separate the workers and the master tasks.
+     * Needs to be "true" to support dynamic recovery.
+     */
+    public static final boolean SPLIT_MASTER_WORKER_DEFAULT = true;
+
+    /** Indicates whether this job is run in an internal unit test */
+    public static final String LOCAL_TEST_MODE =
+        "giraph.localTestMode";
+
+    /** Not in local test mode by default */
+    public static final boolean LOCAL_TEST_MODE_DEFAULT = false;
+
+    /**
+     * Minimum percent of the maximum number of workers that have responded
+     * in order to continue progressing. (float)
+     */
+    public static final String MIN_PERCENT_RESPONDED =
+        "giraph.minPercentResponded";
+    /** Default 100% response rate for workers */
+    public static final float MIN_PERCENT_RESPONDED_DEFAULT = 100.0f;
+
+    /** Polling timeout to check on the number of responded tasks (int) */
+    public static final String POLL_MSECS = "giraph.pollMsecs";
+    /** Default poll msecs (30 seconds) */
+    public static final int POLL_MSECS_DEFAULT = 30*1000;
+
+    /**
+     *  ZooKeeper comma-separated list (if not set,
+     *  will start up ZooKeeper locally)
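+     *  e.g. "zk1.example.com:2181,zk2.example.com:2181" (hostnames are
+     *  illustrative)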
+     */
+    public static final String ZOOKEEPER_LIST = "giraph.zkList";
+
+    /** ZooKeeper session millisecond timeout */
+    public static final String ZOOKEEPER_SESSION_TIMEOUT =
+        "giraph.zkSessionMsecTimeout";
+    /** Default ZooKeeper session millisecond timeout */
+    public static final int ZOOKEEPER_SESSION_TIMEOUT_DEFAULT = 60*1000;
+
+    /** Polling interval to check for the final ZooKeeper server data */
+    public static final String ZOOKEEPER_SERVERLIST_POLL_MSECS =
+        "giraph.zkServerlistPollMsecs";
+    /** Default polling interval to check for the final ZooKeeper server data */
+    public static final int ZOOKEEPER_SERVERLIST_POLL_MSECS_DEFAULT =
+        3*1000;
+
+    /** Number of nodes (not tasks) to run ZooKeeper on */
+    public static final String ZOOKEEPER_SERVER_COUNT =
+        "giraph.zkServerCount";
+    /** Default number of nodes to run ZooKeeper on */
+    public static final int ZOOKEEPER_SERVER_COUNT_DEFAULT = 1;
+
+    /** ZooKeeper port to use */
+    public static final String ZOOKEEPER_SERVER_PORT =
+        "giraph.zkServerPort";
+    /** Default ZooKeeper port to use */
+    public static final int ZOOKEEPER_SERVER_PORT_DEFAULT = 22181;
+
+    /** Location of the ZooKeeper jar - Used internally, not meant for users */
+    public static final String ZOOKEEPER_JAR = "giraph.zkJar";
+
+    /** Local ZooKeeper directory to use */
+    public static final String ZOOKEEPER_DIR = "giraph.zkDir";
+
+    /** Initial port to start using for the RPC communication */
+    public static final String RPC_INITIAL_PORT = "giraph.rpcInitialPort";
+    /** Default port to start using for the RPC communication */
+    public static final int RPC_INITIAL_PORT_DEFAULT = 30000;
+
+    /** Maximum bind attempts for different RPC ports */
+    public static final String MAX_RPC_PORT_BIND_ATTEMPTS =
+        "giraph.maxRpcPortBindAttempts";
+    /** Default maximum bind attempts for different RPC ports */
+    public static final int MAX_RPC_PORT_BIND_ATTEMPTS_DEFAULT = 20;
+
+    /** Maximum number of RPC handlers */
+    public static final String RPC_NUM_HANDLERS = "giraph.rpcNumHandlers";
+    /** Default maximum number of RPC handlers */
+    public static final int RPC_NUM_HANDLERS_DEFAULT = 100;
+
+    /**
+     *  Maximum number of vertices per partition before sending.
+     *  (input superstep only).
+     */
+    public static final String MAX_VERTICES_PER_PARTITION =
+        "giraph.maxVerticesPerPartition";
+    /** Default maximum number of vertices per partition before sending. */
+    public static final int MAX_VERTICES_PER_PARTITION_DEFAULT = 100000;
+
+    /** Maximum number of messages per peer before flush */
+    public static final String MSG_SIZE = "giraph.msgSize";
+    /** Default maximum number of messages per peer before flush */
+    public static final int MSG_SIZE_DEFAULT = 1000;
+
+    /** Maximum number of messages that can be bulk sent during a flush */
+    public static final String MAX_MESSAGES_PER_FLUSH_PUT =
+        "giraph.maxMessagesPerFlushPut";
+    /** Default number of messages that can be bulk sent during a flush */
+    public static final int DEFAULT_MAX_MESSAGES_PER_FLUSH_PUT = 5000;
+
+    /** Number of flush threads per peer */
+    public static final String MSG_NUM_FLUSH_THREADS =
+        "giraph.msgNumFlushThreads";
+
+    /** Number of poll attempts prior to failing the job (int) */
+    public static final String POLL_ATTEMPTS = "giraph.pollAttempts";
+    /** Default poll attempts */
+    public static final int POLL_ATTEMPTS_DEFAULT = 10;
+
+    /** Number of minimum vertices in each vertex range */
+    public static final String MIN_VERTICES_PER_RANGE =
+        "giraph.minVerticesPerRange";
+    /** Default number of minimum vertices in each vertex range */
+    public static final long MIN_VERTICES_PER_RANGE_DEFAULT = 3;
+
+    /** Minimum stragglers of the superstep before printing them out */
+    public static final String PARTITION_LONG_TAIL_MIN_PRINT =
+        "giraph.partitionLongTailMinPrint";
+    /** Only print stragglers with one as a default */
+    public static final int PARTITION_LONG_TAIL_MIN_PRINT_DEFAULT = 1;
+
+    /** Use superstep counters? (boolean) */
+    public static final String USE_SUPERSTEP_COUNTERS =
+        "giraph.useSuperstepCounters";
+    /** Default is to use the superstep counters */
+    public static final boolean USE_SUPERSTEP_COUNTERS_DEFAULT = true;
+
+    /**
+     * Set the multiplicative factor of how many partitions to create from
+     * a single InputSplit based on the number of total InputSplits.  For
+     * example, if there are 10 total InputSplits and this is set to 0.5, then
+     * you will get 0.5 * 10 = 5 partitions for every InputSplit (given that the
+     * minimum size is met).
+     */
+    public static final String TOTAL_INPUT_SPLIT_MULTIPLIER =
+        "giraph.totalInputSplitMultiplier";
+    /** Default total input split multiplier */
+    public static final float TOTAL_INPUT_SPLIT_MULTIPLIER_DEFAULT = 0.5f;
+
+    /**
+     * Input split sample percent - Used only for sampling and testing, rather
+     * than an actual job.  The idea is that for testing, you might want to
+     * load only a fraction of the actual input splits from your
+     * VertexInputFormat (values should be in [0, 100]).
+     */
+    public static final String INPUT_SPLIT_SAMPLE_PERCENT =
+        "giraph.inputSplitSamplePercent";
+    /** Default is to use all the input splits */
+    public static final float INPUT_SPLIT_SAMPLE_PERCENT_DEFAULT = 100f;
+
+    /**
+     * To limit outlier input splits from producing too many vertices or to
+     * help with testing, the number of vertices loaded from an input split can
+     * be limited.  By default, everything is loaded.
+     */
+    public static final String INPUT_SPLIT_MAX_VERTICES =
+        "giraph.InputSplitMaxVertices";
+    /**
+     * Default is that all the vertices are to be loaded from the input
+     * split
+     */
+    public static final long INPUT_SPLIT_MAX_VERTICES_DEFAULT = -1;
+
+    /** Java opts passed to ZooKeeper startup */
+    public static final String ZOOKEEPER_JAVA_OPTS =
+        "giraph.zkJavaOpts";
+    /** Default java opts passed to ZooKeeper startup */
+    public static final String ZOOKEEPER_JAVA_OPTS_DEFAULT =
+        "-Xmx512m -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC " +
+        "-XX:CMSInitiatingOccupancyFraction=70 -XX:MaxGCPauseMillis=100";
+
+    /**
+     *  How often to checkpoint (i.e. 0 means no checkpointing,
+     *  1 means every superstep, 2 means every two supersteps, etc.).
+     */
+    public static final String CHECKPOINT_FREQUENCY =
+        "giraph.checkpointFrequency";
+
+    /** Default checkpointing frequency of every 2 supersteps. */
+    public static final int CHECKPOINT_FREQUENCY_DEFAULT = 2;
+
+    /**
+     * Delete checkpoints after a successful job run?
+     */
+    public static final String CLEANUP_CHECKPOINTS_AFTER_SUCCESS =
+        "giraph.cleanupCheckpointsAfterSuccess";
+    /** Default is to clean up the checkpoints after a successful job */
+    public static final boolean CLEANUP_CHECKPOINTS_AFTER_SUCCESS_DEFAULT =
+        true;
+
+    /**
+     * An application can be restarted manually by selecting a superstep.  The
+     * corresponding checkpoint must exist for this to work.  The user should
+     * set a long value.  Default is start from scratch.
+     */
+    public static final String RESTART_SUPERSTEP = "giraph.restartSuperstep";
+
+    /**
+     * If ZOOKEEPER_LIST is not set, then use this directory to manage
+     * ZooKeeper
+     */
+    public static final String ZOOKEEPER_MANAGER_DIRECTORY =
+        "giraph.zkManagerDirectory";
+    /**
+     * Default ZooKeeper manager directory (where the HDFS files used to
+     * determine the servers will go).  Final directory path will also have
+     * the job number for uniqueness.
+     */
+    public static final String ZOOKEEPER_MANAGER_DIR_DEFAULT =
+        "_bsp/_defaultZkManagerDir";
+
+    /** This directory has/stores the available checkpoint files in HDFS. */
+    public static final String CHECKPOINT_DIRECTORY =
+        "giraph.checkpointDirectory";
+    /**
+     * Default checkpoint directory (where checkpoint files go in HDFS).  Final
+     * directory path will also have the job number for uniqueness
+     */
+    public static final String CHECKPOINT_DIRECTORY_DEFAULT =
+        "_bsp/_checkpoints/";
+
+    /** Keep the ZooKeeper output for debugging? Default is to remove it. */
+    public static final String KEEP_ZOOKEEPER_DATA =
+        "giraph.keepZooKeeperData";
+    /** Default is to remove ZooKeeper data. */
+    public static final Boolean KEEP_ZOOKEEPER_DATA_DEFAULT = false;
+
+    /** Default ZooKeeper tick time. */
+    public static final int DEFAULT_ZOOKEEPER_TICK_TIME = 6000;
+    /** Default ZooKeeper init limit (in ticks). */
+    public static final int DEFAULT_ZOOKEEPER_INIT_LIMIT = 10;
+    /** Default ZooKeeper sync limit (in ticks). */
+    public static final int DEFAULT_ZOOKEEPER_SYNC_LIMIT = 5;
+    /** Default ZooKeeper snap count. */
+    public static final int DEFAULT_ZOOKEEPER_SNAP_COUNT = 50000;
+    /** Default ZooKeeper maximum client connections. */
+    public static final int DEFAULT_ZOOKEEPER_MAX_CLIENT_CNXNS = 10000;
+    /** Default ZooKeeper minimum session timeout of 5 minutes (in msecs). */
+    public static final int DEFAULT_ZOOKEEPER_MIN_SESSION_TIMEOUT = 300*1000;
+    /** Default ZooKeeper maximum session timeout of 10 minutes (in msecs). */
+    public static final int DEFAULT_ZOOKEEPER_MAX_SESSION_TIMEOUT = 600*1000;
+
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(GiraphJob.class);
+
+    /**
+     * Constructor that will instantiate the configuration
+     *
+     * @param jobName User-defined job name
+     * @throws IOException
+     */
+    public GiraphJob(String jobName) throws IOException {
+        super(new Configuration(), jobName);
+    }
+
+    /**
+     * Constructor.
+     *
+     * @param conf User-defined configuration
+     * @param jobName User-defined job name
+     * @throws IOException
+     */
+    public GiraphJob(Configuration conf, String jobName) throws IOException {
+        super(conf, jobName);
+    }
+
+    /**
+     * Make sure the configuration is set properly by the user prior to
+     * submitting the job.
+     */
+    private void checkConfiguration() {
+        if (conf.getInt(MAX_WORKERS, -1) < 0) {
+            throw new RuntimeException("No valid " + MAX_WORKERS);
+        }
+        if (conf.getFloat(MIN_PERCENT_RESPONDED,
+                          MIN_PERCENT_RESPONDED_DEFAULT) <= 0.0f ||
+                conf.getFloat(MIN_PERCENT_RESPONDED,
+                              MIN_PERCENT_RESPONDED_DEFAULT) > 100.0f) {
+            throw new IllegalArgumentException(
+                "Invalid " +
+                conf.getFloat(MIN_PERCENT_RESPONDED,
+                              MIN_PERCENT_RESPONDED_DEFAULT) + " for " +
+                MIN_PERCENT_RESPONDED);
+        }
+        if (conf.getInt(MIN_WORKERS, -1) < 0) {
+            throw new IllegalArgumentException("No valid " + MIN_WORKERS);
+        }
+        if (BspUtils.getVertexClass(getConfiguration()) == null) {
+            throw new IllegalArgumentException("GiraphJob: Null VERTEX_CLASS");
+        }
+        if (BspUtils.getVertexInputFormatClass(getConfiguration()) == null) {
+            throw new IllegalArgumentException(
+                "GiraphJob: Null VERTEX_INPUT_FORMAT_CLASS");
+        }
+        if (BspUtils.getVertexResolverClass(getConfiguration()) == null) {
+            setVertexResolverClass(VertexResolver.class);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("GiraphJob: No class found for " +
+                         VERTEX_RESOLVER_CLASS + ", defaulting to " +
+                         VertexResolver.class.getCanonicalName());
+            }
+        }
+    }
+
+    /**
+     * Set the vertex class (required)
+     *
+     * @param vertexClass Runs vertex computation
+     */
+    final public void setVertexClass(Class<?> vertexClass) {
+        getConfiguration().setClass(VERTEX_CLASS, vertexClass, BasicVertex.class);
+    }
+
+    /**
+     * Set the vertex input format class (required)
+     *
+     * @param vertexInputFormatClass Determines how graph is input
+     */
+    final public void setVertexInputFormatClass(
+            Class<?> vertexInputFormatClass) {
+        getConfiguration().setClass(VERTEX_INPUT_FORMAT_CLASS,
+                                    vertexInputFormatClass,
+                                    VertexInputFormat.class);
+    }
+
+    /**
+     * Set the vertex output format class (optional)
+     *
+     * @param vertexOutputFormatClass Determines how graph is output
+     */
+    final public void setVertexOutputFormatClass(
+            Class<?> vertexOutputFormatClass) {
+        getConfiguration().setClass(VERTEX_OUTPUT_FORMAT_CLASS,
+                                    vertexOutputFormatClass,
+                                    VertexOutputFormat.class);
+    }
+
+    /**
+     * Set the vertex combiner class (optional)
+     *
+     * @param vertexCombinerClass Determines how vertex messages are combined
+     */
+    final public void setVertexCombinerClass(Class<?> vertexCombinerClass) {
+        getConfiguration().setClass(VERTEX_COMBINER_CLASS,
+                                    vertexCombinerClass,
+                                    VertexCombiner.class);
+    }
+
+    /**
+     * Set the graph partitioner factory class (optional)
+     *
+     * @param graphPartitionerFactoryClass Determines how the graph is
+     *        partitioned
+     */
+    final public void setGraphPartitionerFactoryClass(
+            Class<?> graphPartitionerFactoryClass) {
+        getConfiguration().setClass(GRAPH_PARTITIONER_FACTORY_CLASS,
+                                    graphPartitionerFactoryClass,
+                                    GraphPartitionerFactory.class);
+    }
+
+    /**
+     * Set the vertex resolver class (optional)
+     *
+     * @param vertexResolverClass Determines how vertex mutations are resolved
+     */
+    final public void setVertexResolverClass(Class<?> vertexResolverClass) {
+        getConfiguration().setClass(VERTEX_RESOLVER_CLASS,
+                                    vertexResolverClass,
+                                    VertexResolver.class);
+    }
+
+    /**
+     * Set the worker context class (optional)
+     *
+     * @param workerContextClass Determines what code is executed on each
+     *        worker before and after each superstep and computation
+     */
+    final public void setWorkerContextClass(Class<?> workerContextClass) {
+        getConfiguration().setClass(WORKER_CONTEXT_CLASS,
+                                    workerContextClass,
+                                    WorkerContext.class);
+    }
+
+    /**
+     * Set the aggregator writer class (optional)
+     *
+     * @param aggregatorWriterClass Determines how the aggregators are
+     *        written to file at the end of the job
+     */
+    final public void setAggregatorWriterClass(
+            Class<?> aggregatorWriterClass) {
+        getConfiguration().setClass(AGGREGATOR_WRITER_CLASS,
+                                    aggregatorWriterClass,
+                                    AggregatorWriter.class);
+    }
+
+    /**
+     * Set worker configuration for determining what is required for
+     * a superstep.
+     *
+     * @param minWorkers Minimum workers to do a superstep
+     * @param maxWorkers Maximum workers to do a superstep
+     *        (max map tasks in job)
+     * @param minPercentResponded Percentage (0 - 100) of the workers that
+     *        must have responded before continuing the superstep
+     */
+    final public void setWorkerConfiguration(int minWorkers,
+                                             int maxWorkers,
+                                             float minPercentResponded) {
+        conf.setInt(MIN_WORKERS, minWorkers);
+        conf.setInt(MAX_WORKERS, maxWorkers);
+        conf.setFloat(MIN_PERCENT_RESPONDED, minPercentResponded);
+    }
+
+    /**
+     * Utilize an existing ZooKeeper service.  If this is not set, ZooKeeper
+     * will be dynamically started by Giraph for this job.
+     *
+     * @param serverList Comma separated list of servers and ports
+     *        (e.g. zk1:2221,zk2:2221)
+     */
+    final public void setZooKeeperConfiguration(String serverList) {
+        conf.set(ZOOKEEPER_LIST, serverList);
+    }
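+
+    // A minimal configuration sketch (illustrative only; the host names
+    // and worker counts are hypothetical):
+    //
+    //   job.setWorkerConfiguration(10, 10, 100.0f);
+    //   job.setZooKeeperConfiguration("zk1:2221,zk2:2221");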
+
+    /**
+     * Check if the configuration is local.  If it is local, do additional
+     * checks due to the restrictions of LocalJobRunner.
+     *
+     * @param conf Configuration
+     */
+    private static void checkLocalJobRunnerConfiguration(
+            Configuration conf) {
+        String jobTracker = conf.get("mapred.job.tracker", null);
+        if (!jobTracker.equals("local")) {
+            // Nothing to check
+            return;
+        }
+
+        int maxWorkers = conf.getInt(MAX_WORKERS, -1);
+        if (maxWorkers != 1) {
+            throw new IllegalArgumentException(
+                "checkLocalJobRunnerConfiguration: When using " +
+                "LocalJobRunner, must have only one worker since " +
+                "only 1 task at a time!");
+        }
+        if (conf.getBoolean(SPLIT_MASTER_WORKER,
+                            SPLIT_MASTER_WORKER_DEFAULT)) {
+            throw new IllegalArgumentException(
+                "checkLocalJobRunnerConfiguration: When using " +
+                "LocalJobRunner, you cannot run in split master / worker " +
+                "mode since there is only 1 task at a time!");
+        }
+    }
+
+    /**
+     * Check whether a specified int conf value is set and if not, set it.
+     *
+     * @param param Conf value to check
+     * @param defaultValue Value to assign if not set
+     */
+    private void setIntConfIfDefault(String param, int defaultValue) {
+        if (conf.getInt(param, Integer.MIN_VALUE) == Integer.MIN_VALUE) {
+            conf.setInt(param, defaultValue);
+        }
+    }
+
+    /**
+     * Runs the actual graph application through Hadoop Map-Reduce.
+     *
+     * @param verbose If true, provide verbose output
+     * @return True if the job completed successfully, false otherwise
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     * @throws IOException
+     */
+    final public boolean run(boolean verbose)
+            throws IOException, InterruptedException, ClassNotFoundException {
+        checkConfiguration();
+        checkLocalJobRunnerConfiguration(conf);
+        setNumReduceTasks(0);
+        // Hopefully most users won't hit this limit; it can be set higher if desired
+        setIntConfIfDefault("mapreduce.job.counters.limit", 512);
+
+        // Capacity scheduler-specific settings.  These should be enough for
+        // a reasonable Giraph job
+        setIntConfIfDefault("mapred.job.map.memory.mb", 1024);
+        setIntConfIfDefault("mapred.job.reduce.memory.mb", 1024);
+
+        // Speculative execution doesn't make sense for Giraph
+        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
+
+        // Set the ping interval to 5 minutes instead of one minute
+        // (DEFAULT_PING_INTERVAL)
+        Client.setPingInterval(conf, 60000*5);
+
+        if (getJar() == null) {
+            setJarByClass(GiraphJob.class);
+        }
+        // MAPREDUCE-1938 should allow the user jars/classes to be
+        // loaded first
+        conf.setBoolean("mapreduce.user.classpath.first", true);
+
+        setMapperClass(GraphMapper.class);
+        setInputFormatClass(BspInputFormat.class);
+        setOutputFormatClass(BspOutputFormat.class);
+        return waitForCompletion(verbose);
+    }
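+
+    // A minimal driver sketch (illustrative only; SimplePageRankVertex and
+    // the input/output format classes are hypothetical stand-ins for user
+    // code):
+    //
+    //   GiraphJob job = new GiraphJob("PageRank");
+    //   job.setVertexClass(SimplePageRankVertex.class);
+    //   job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+    //   job.setVertexOutputFormatClass(SimplePageRankVertexOutputFormat.class);
+    //   job.setWorkerConfiguration(10, 10, 100.0f);
+    //   boolean success = job.run(true);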
+}
diff --git a/src/main/java/org/apache/giraph/graph/GlobalStats.java b/src/main/java/org/apache/giraph/graph/GlobalStats.java
new file mode 100644
index 0000000..db0389d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/GlobalStats.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Stats aggregated by the master.
+ */
+public class GlobalStats implements Writable {
+    private long vertexCount = 0;
+    private long finishedVertexCount = 0;
+    private long edgeCount = 0;
+    private long messageCount = 0;
+
+    public void addPartitionStats(PartitionStats partitionStats) {
+        this.vertexCount += partitionStats.getVertexCount();
+        this.finishedVertexCount += partitionStats.getFinishedVertexCount();
+        this.edgeCount += partitionStats.getEdgeCount();
+    }
+
+    public long getVertexCount() {
+        return vertexCount;
+    }
+
+    public long getFinishedVertexCount() {
+        return finishedVertexCount;
+    }
+
+    public long getEdgeCount() {
+        return edgeCount;
+    }
+
+    public long getMessageCount() {
+        return messageCount;
+    }
+
+    public void addMessageCount(long messageCount) {
+        this.messageCount += messageCount;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        vertexCount = input.readLong();
+        finishedVertexCount = input.readLong();
+        edgeCount = input.readLong();
+        messageCount = input.readLong();
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        output.writeLong(vertexCount);
+        output.writeLong(finishedVertexCount);
+        output.writeLong(edgeCount);
+        output.writeLong(messageCount);
+    }
+
+    @Override
+    public String toString() {
+        return "(vtx=" + vertexCount + ",finVtx=" +
+               finishedVertexCount + ",edges=" + edgeCount + ",msgCount=" +
+               messageCount + ")";
+    }
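+
+    // A minimal round-trip sketch (illustrative only): GlobalStats is a
+    // Hadoop Writable, so it can be serialized and restored like any other.
+    //
+    //   GlobalStats stats = new GlobalStats();
+    //   stats.addMessageCount(42);
+    //   ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    //   stats.write(new DataOutputStream(baos));
+    //   GlobalStats copy = new GlobalStats();
+    //   copy.readFields(new DataInputStream(
+    //       new ByteArrayInputStream(baos.toByteArray())));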
+}
diff --git a/src/main/java/org/apache/giraph/graph/GraphMapper.java b/src/main/java/org/apache/giraph/graph/GraphMapper.java
new file mode 100644
index 0000000..2f28ee2
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/GraphMapper.java
@@ -0,0 +1,645 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import com.google.common.collect.Iterables;
+import org.apache.giraph.bsp.CentralizedServiceWorker;
+import org.apache.giraph.graph.partition.Partition;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.giraph.utils.MemoryUtils;
+import org.apache.giraph.utils.ReflectionUtils;
+import org.apache.giraph.zk.ZooKeeperManager;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.filecache.DistributedCache;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.log4j.Logger;
+
+import java.io.IOException;
+import java.lang.reflect.Type;
+import java.net.URL;
+import java.net.URLDecoder;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Enumeration;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * This mapper executes the BSP graph tasks.  Since it does not pass data
+ * as key-value pairs through the MR framework, the types are irrelevant.
+ */
+@SuppressWarnings("rawtypes")
+public class GraphMapper<I extends WritableComparable, V extends Writable,
+        E extends Writable, M extends Writable> extends
+        Mapper<Object, Object, Object, Object> {
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(GraphMapper.class);
+    /** Coordination service worker */
+    CentralizedServiceWorker<I, V, E, M> serviceWorker;
+    /** Coordination service master thread */
+    Thread masterThread = null;
+    /** The map should be run exactly once, or else there is a problem. */
+    boolean mapAlreadyRun = false;
+    /** Manages the ZooKeeper servers if necessary (dynamic startup) */
+    private ZooKeeperManager zkManager;
+    /** Configuration */
+    private Configuration conf;
+    /** Already complete? */
+    private boolean done = false;
+    /** What kind of functions is this mapper doing? */
+    private MapFunctions mapFunctions = MapFunctions.UNKNOWN;
+    /**
+     * Graph state for all vertices that is used for the duration of
+     * this mapper.
+     */
+    private GraphState<I, V, E, M> graphState = new GraphState<I, V, E, M>();
+
+    /** What kinds of functions to run on this mapper */
+    public enum MapFunctions {
+        UNKNOWN,
+        MASTER_ONLY,
+        MASTER_ZOOKEEPER_ONLY,
+        WORKER_ONLY,
+        ALL,
+        ALL_EXCEPT_ZOOKEEPER
+    }
+
+    /**
+     * Get the map function enum
+     *
+     * @return Functions this mapper will run
+     */
+    public MapFunctions getMapFunctions() {
+        return mapFunctions;
+    }
+
+    /**
+     * Get the aggregator usage, a subset of the functionality
+     *
+     * @return Aggregator usage interface
+     */
+    public final AggregatorUsage getAggregatorUsage() {
+        return serviceWorker;
+    }
+
+    public final WorkerContext getWorkerContext() {
+        return serviceWorker.getWorkerContext();
+    }
+
+    public final GraphState<I, V, E, M> getGraphState() {
+        return graphState;
+    }
+
+    /**
+     * Default handler for uncaught exceptions.
+     */
+    class OverrideExceptionHandler
+            implements Thread.UncaughtExceptionHandler {
+        public void uncaughtException(Thread t, Throwable e) {
+            LOG.fatal(
+                "uncaughtException: OverrideExceptionHandler on thread " +
+                t.getName() + ", msg = " +  e.getMessage() +
+                ", exiting...", e);
+            System.exit(1);
+        }
+    }
+
+    /**
+     * Copied from JobConf to get the location of this jar.  Workaround for
+     * things like Oozie map-reduce jobs.
+     *
+     * @param my_class Class to search the class loader path for to locate
+     *        the relevant jar file
+     * @return Location of the jar file containing my_class
+     */
+    private static String findContainingJar(Class<?> my_class) {
+        ClassLoader loader = my_class.getClassLoader();
+        String class_file =
+            my_class.getName().replaceAll("\\.", "/") + ".class";
+        try {
+            for (Enumeration<?> itr = loader.getResources(class_file);
+                    itr.hasMoreElements();) {
+                URL url = (URL) itr.nextElement();
+                if ("jar".equals(url.getProtocol())) {
+                    String toReturn = url.getPath();
+                    if (toReturn.startsWith("file:")) {
+                        toReturn = toReturn.substring("file:".length());
+                    }
+                    toReturn = URLDecoder.decode(toReturn, "UTF-8");
+                    return toReturn.replaceAll("!.*$", "");
+                }
+            }
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+        return null;
+    }
+
+    /**
+     * Make sure that all registered classes have matching types.  This
+     * is a little tricky due to type erasure; we cannot simply get them from
+     * the class type arguments.  Also, set the vertex index, vertex value,
+     * edge value and message value classes.
+     *
+     * @param conf Configuration to get the various classes
+     */
+    public void determineClassTypes(Configuration conf) {
+        Class<? extends BasicVertex<I, V, E, M>> vertexClass =
+            BspUtils.<I, V, E, M>getVertexClass(conf);
+        List<Class<?>> classList = ReflectionUtils.<BasicVertex>getTypeArguments(
+            BasicVertex.class, vertexClass);
+        Type vertexIndexType = classList.get(0);
+        Type vertexValueType = classList.get(1);
+        Type edgeValueType = classList.get(2);
+        Type messageValueType = classList.get(3);
+
+        Class<? extends VertexInputFormat<I, V, E, M>> vertexInputFormatClass =
+            BspUtils.<I, V, E, M>getVertexInputFormatClass(conf);
+        classList = ReflectionUtils.<VertexInputFormat>getTypeArguments(
+            VertexInputFormat.class, vertexInputFormatClass);
+        if (classList.get(0) == null) {
+            LOG.warn("Input format vertex index type is not known");
+        } else if (!vertexIndexType.equals(classList.get(0))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Vertex index types don't match, " +
+                "vertex - " + vertexIndexType +
+                ", vertex input format - " + classList.get(0));
+        }
+        if (classList.get(1) == null) {
+            LOG.warn("Input format vertex value type is not known");
+        } else if (!vertexValueType.equals(classList.get(1))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Vertex value types don't match, " +
+                "vertex - " + vertexValueType +
+                ", vertex input format - " + classList.get(1));
+        }
+        if (classList.get(2) == null) {
+            LOG.warn("Input format edge value type is not known");
+        } else if (!edgeValueType.equals(classList.get(2))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Edge value types don't match, " +
+                "vertex - " + edgeValueType +
+                ", vertex input format - " + classList.get(2));
+        }
+        // If a vertex combiner class is set, check it
+        Class<? extends VertexCombiner<I, M>> vertexCombinerClass =
+            BspUtils.<I, M>getVertexCombinerClass(conf);
+        if (vertexCombinerClass != null) {
+            classList = ReflectionUtils.<VertexCombiner>getTypeArguments(
+                VertexCombiner.class, vertexCombinerClass);
+            if (!vertexIndexType.equals(classList.get(0))) {
+                throw new IllegalArgumentException(
+                    "checkClassTypes: Vertex index types don't match, " +
+                    "vertex - " + vertexIndexType +
+                    ", vertex combiner - " + classList.get(0));
+            }
+            if (!messageValueType.equals(classList.get(1))) {
+                throw new IllegalArgumentException(
+                    "checkClassTypes: Message value types don't match, " +
+                    "vertex - " + vertexValueType +
+                    ", vertex combiner - " + classList.get(1));
+            }
+        }
+        // If a vertex output format class is set, check it
+        Class<? extends VertexOutputFormat<I, V, E>>
+            vertexOutputFormatClass =
+                BspUtils.<I, V, E>getVertexOutputFormatClass(conf);
+        if (vertexOutputFormatClass != null) {
+            classList =
+                ReflectionUtils.<VertexOutputFormat>getTypeArguments(
+                    VertexOutputFormat.class, vertexOutputFormatClass);
+            if (classList.get(0) == null) {
+                LOG.warn("Output format vertex index type is not known");
+            } else if (!vertexIndexType.equals(classList.get(0))) {
+                throw new IllegalArgumentException(
+                    "checkClassTypes: Vertex index types don't match, " +
+                    "vertex - " + vertexIndexType +
+                    ", vertex output format - " + classList.get(0));
+            }
+            if (classList.get(1) == null) {
+                LOG.warn("Output format vertex value type is not known");
+            } else if (!vertexValueType.equals(classList.get(1))) {
+                throw new IllegalArgumentException(
+                    "checkClassTypes: Vertex value types don't match, " +
+                    "vertex - " + vertexValueType +
+                    ", vertex output format - " + classList.get(1));
+            }
+            if (classList.get(2) == null) {
+                LOG.warn("Output format edge value type is not known");
+            } else if (!edgeValueType.equals(classList.get(2))) {
+                throw new IllegalArgumentException(
+                    "checkClassTypes: Edge value types don't match, " +
+                    "vertex - " + vertexIndexType +
+                    ", vertex output format - " + classList.get(2));
+            }
+        }
+        // Vertex resolver might never select the types
+        Class<? extends VertexResolver<I, V, E, M>>
+            vertexResolverClass =
+                BspUtils.<I, V, E, M>getVertexResolverClass(conf);
+        classList = ReflectionUtils.<VertexResolver>getTypeArguments(
+            VertexResolver.class, vertexResolverClass);
+        if (classList.get(0) != null &&
+                !vertexIndexType.equals(classList.get(0))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Vertex index types don't match, " +
+                "vertex - " + vertexIndexType +
+                ", vertex resolver - " + classList.get(0));
+        }
+        if (classList.get(1) != null &&
+                !vertexValueType.equals(classList.get(1))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Vertex value types don't match, " +
+                "vertex - " + vertexValueType +
+                ", vertex resolver - " + classList.get(1));
+        }
+        if (classList.get(2) != null &&
+                !edgeValueType.equals(classList.get(2))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Edge value types don't match, " +
+                "vertex - " + edgeValueType +
+                ", vertex resolver - " + classList.get(2));
+        }
+        if (classList.get(3) != null &&
+                !messageValueType.equals(classList.get(3))) {
+            throw new IllegalArgumentException(
+                "checkClassTypes: Message value types don't match, " +
+                "vertex - " + edgeValueType +
+                ", vertex resolver - " + classList.get(3));
+        }
+        conf.setClass(GiraphJob.VERTEX_INDEX_CLASS,
+                      (Class<?>) vertexIndexType,
+                      WritableComparable.class);
+        conf.setClass(GiraphJob.VERTEX_VALUE_CLASS,
+                      (Class<?>) vertexValueType,
+                      Writable.class);
+        conf.setClass(GiraphJob.EDGE_VALUE_CLASS,
+                      (Class<?>) edgeValueType,
+                      Writable.class);
+        conf.setClass(GiraphJob.MESSAGE_VALUE_CLASS,
+                      (Class<?>) messageValueType,
+                      Writable.class);
+    }
+
+    /**
+     * Figure out what functions this mapper should do.  Basic logic is as
+     * follows:
+     * 1) If not split master, every task does everything (and possibly
+     *    runs ZooKeeper).
+     * 2) If split master/worker, masters also run ZooKeeper (if it's not
+     *    given to us).
+     *
+     * @param conf Configuration to use
+     * @param zkManager ZooKeeper manager, if Giraph started one
+     * @return Functions that this mapper should do.
+     */
+    private static MapFunctions determineMapFunctions(
+            Configuration conf,
+            ZooKeeperManager zkManager) {
+        boolean splitMasterWorker =
+            conf.getBoolean(GiraphJob.SPLIT_MASTER_WORKER,
+                            GiraphJob.SPLIT_MASTER_WORKER_DEFAULT);
+        int taskPartition = conf.getInt("mapred.task.partition", -1);
+        boolean zkAlreadyProvided =
+            conf.get(GiraphJob.ZOOKEEPER_LIST) != null;
+        MapFunctions functions = MapFunctions.UNKNOWN;
+        // What functions should this mapper do?
+        if (!splitMasterWorker) {
+            if ((zkManager != null) && zkManager.runsZooKeeper()) {
+                functions = MapFunctions.ALL;
+            } else {
+                functions = MapFunctions.ALL_EXCEPT_ZOOKEEPER;
+            }
+        } else {
+            if (zkAlreadyProvided) {
+                int masterCount =
+                    conf.getInt(GiraphJob.ZOOKEEPER_SERVER_COUNT,
+                                GiraphJob.ZOOKEEPER_SERVER_COUNT_DEFAULT);
+                if (taskPartition < masterCount) {
+                    functions = MapFunctions.MASTER_ONLY;
+                } else {
+                    functions = MapFunctions.WORKER_ONLY;
+                }
+            } else {
+                if ((zkManager != null) && zkManager.runsZooKeeper()) {
+                    functions = MapFunctions.MASTER_ZOOKEEPER_ONLY;
+                } else {
+                    functions = MapFunctions.WORKER_ONLY;
+                }
+            }
+        }
+        return functions;
+    }
+
+    @Override
+    public void setup(Context context)
+            throws IOException, InterruptedException {
+        context.setStatus("setup: Beginning mapper setup.");
+        graphState.setContext(context);
+        // Setting the default handler for uncaught exceptions.
+        Thread.setDefaultUncaughtExceptionHandler(
+            new OverrideExceptionHandler());
+        conf = context.getConfiguration();
+        // Hadoop security needs this property to be set
+        if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
+            conf.set("mapreduce.job.credentials.binary",
+                    System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
+        }
+        // Ensure the user classes have matching types and figure them out
+        determineClassTypes(conf);
+
+        // Do some initial setup (possibly starting up a Zookeeper service)
+        context.setStatus("setup: Initializing Zookeeper services.");
+        if (!conf.getBoolean(GiraphJob.LOCAL_TEST_MODE,
+                GiraphJob.LOCAL_TEST_MODE_DEFAULT)) {
+            Path[] fileClassPaths = DistributedCache.getLocalCacheArchives(conf);
+            String zkClasspath = null;
+            if (fileClassPaths == null) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("Distributed cache is empty. Assuming fatjar.");
+                }
+                String jarFile = context.getJar();
+                if (jarFile == null) {
+                    jarFile = findContainingJar(getClass());
+                }
+                zkClasspath = jarFile.replaceFirst("file:", "");
+            } else {
+                StringBuilder sb = new StringBuilder();
+                sb.append(fileClassPaths[0]);
+
+                for (int i = 1; i < fileClassPaths.length; i++) {
+                    sb.append(":");
+                    sb.append(fileClassPaths[i]);
+                }
+                zkClasspath = sb.toString();
+            }
+
+            if (LOG.isInfoEnabled()) {
+                LOG.info("setup: classpath @ " + zkClasspath);
+            }
+            conf.set(GiraphJob.ZOOKEEPER_JAR, zkClasspath);
+        }
+        String serverPortList =
+            conf.get(GiraphJob.ZOOKEEPER_LIST, "");
+        if (serverPortList == "") {
+            zkManager = new ZooKeeperManager(context);
+            context.setStatus("setup: Setting up Zookeeper manager.");
+            zkManager.setup();
+            if (zkManager.computationDone()) {
+                done = true;
+                return;
+            }
+            zkManager.onlineZooKeeperServers();
+            serverPortList = zkManager.getZooKeeperServerPortString();
+        }
+        context.setStatus("setup: Connected to Zookeeper service " +
+                          serverPortList);
+        this.mapFunctions = determineMapFunctions(conf, zkManager);
+
+        // Sometimes it takes a while to get multiple ZooKeeper servers up
+        if (conf.getInt(GiraphJob.ZOOKEEPER_SERVER_COUNT,
+                    GiraphJob.ZOOKEEPER_SERVER_COUNT_DEFAULT) > 1) {
+            Thread.sleep(GiraphJob.DEFAULT_ZOOKEEPER_INIT_LIMIT *
+                         GiraphJob.DEFAULT_ZOOKEEPER_TICK_TIME);
+        }
+        int sessionMsecTimeout =
+            conf.getInt(GiraphJob.ZOOKEEPER_SESSION_TIMEOUT,
+                          GiraphJob.ZOOKEEPER_SESSION_TIMEOUT_DEFAULT);
+        try {
+            if ((mapFunctions == MapFunctions.MASTER_ZOOKEEPER_ONLY) ||
+                    (mapFunctions == MapFunctions.MASTER_ONLY) ||
+                    (mapFunctions == MapFunctions.ALL) ||
+                    (mapFunctions == MapFunctions.ALL_EXCEPT_ZOOKEEPER)) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("setup: Starting up BspServiceMaster " +
+                             "(master thread)...");
+                }
+                masterThread =
+                    new MasterThread<I, V, E, M>(
+                        new BspServiceMaster<I, V, E, M>(serverPortList,
+                                                         sessionMsecTimeout,
+                                                         context,
+                                                         this),
+                        context);
+                masterThread.start();
+            }
+            if ((mapFunctions == MapFunctions.WORKER_ONLY) ||
+                    (mapFunctions == MapFunctions.ALL) ||
+                    (mapFunctions == MapFunctions.ALL_EXCEPT_ZOOKEEPER)) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("setup: Starting up BspServiceWorker...");
+                }
+                serviceWorker = new BspServiceWorker<I, V, E, M>(
+                    serverPortList,
+                    sessionMsecTimeout,
+                    context,
+                    this,
+                    graphState);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("setup: Registering health of this worker...");
+                }
+                serviceWorker.setup();
+            }
+        } catch (Exception e) {
+            LOG.error("setup: Caught exception just before end of setup", e);
+            if (zkManager != null) {
+                zkManager.offlineZooKeeperServers(
+                    ZooKeeperManager.State.FAILED);
+            }
+            throw new RuntimeException(
+                "setup: Offlining servers due to exception...", e);
+        }
+        context.setStatus(getMapFunctions().toString() + " starting...");
+    }
+
+    @Override
+    public void map(Object key, Object value, Context context)
+        throws IOException, InterruptedException {
+        // map() only does computation
+        // 1) Run checkpoint per frequency policy.
+        // 2) For every vertex on this mapper, run the compute() function
+        // 3) Wait until all messaging is done.
+        // 4) Check if all vertices are done.  If not, go to 2).
+        // 5) Dump output.
+        if (done) {
+            return;
+        }
+        if ((serviceWorker != null) && (graphState.getNumVertices() == 0)) {
+            return;
+        }
+
+        if ((mapFunctions == MapFunctions.MASTER_ZOOKEEPER_ONLY) ||
+                (mapFunctions == MapFunctions.MASTER_ONLY)) {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("map: No need to do anything when not a worker");
+            }
+            return;
+        }
+
+        if (mapAlreadyRun) {
+            throw new RuntimeException("In BSP, map should have only been" +
+                                       " run exactly once, (already run)");
+        }
+        mapAlreadyRun = true;
+
+        graphState.setSuperstep(serviceWorker.getSuperstep()).
+            setContext(context).setGraphMapper(this);
+
+        try {
+            serviceWorker.getWorkerContext().preApplication();
+        } catch (InstantiationException e) {
+            LOG.fatal("map: preApplication failed in instantiation", e);
+            throw new RuntimeException(
+                "map: preApplication failed in instantiation", e);
+        } catch (IllegalAccessException e) {
+            LOG.fatal("map: preApplication failed in access", e);
+            throw new RuntimeException(
+                "map: preApplication failed in access",e );
+        }
+        context.progress();
+
+        List<PartitionStats> partitionStatsList =
+            new ArrayList<PartitionStats>();
+        do {
+            long superstep = serviceWorker.getSuperstep();
+
+            graphState.setSuperstep(superstep);
+
+            Collection<? extends PartitionOwner> masterAssignedPartitionOwners =
+                serviceWorker.startSuperstep();
+            if (zkManager != null && zkManager.runsZooKeeper()) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("map: Chosen to run ZooKeeper...");
+                }
+                context.setStatus("map: Running Zookeeper Server");
+            }
+
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("map: " + MemoryUtils.getRuntimeMemoryStats());
+            }
+            context.progress();
+
+            serviceWorker.exchangeVertexPartitions(
+                masterAssignedPartitionOwners);
+            context.progress();
+
+            // Might need to restart from another superstep
+            // (manually or automatically), or store a checkpoint
+            if (serviceWorker.getRestartedSuperstep() == superstep) {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("map: Loading from checkpoint " + superstep);
+                }
+                serviceWorker.loadCheckpoint(
+                    serviceWorker.getRestartedSuperstep());
+            } else if (serviceWorker.checkpointFrequencyMet(superstep)) {
+                serviceWorker.storeCheckpoint();
+            }
+
+            serviceWorker.getWorkerContext().setGraphState(graphState);
+            serviceWorker.getWorkerContext().preSuperstep();
+            context.progress();
+
+            partitionStatsList.clear();
+            for (Partition<I, V, E, M> partition :
+                    serviceWorker.getPartitionMap().values()) {
+                PartitionStats partitionStats =
+                    new PartitionStats(partition.getPartitionId(), 0, 0, 0);
+                for (BasicVertex<I, V, E, M> basicVertex :
+                        partition.getVertices()) {
+                    // Make sure every vertex has the current
+                    // graphState before computing
+                    basicVertex.setGraphState(graphState);
+                    if (basicVertex.isHalted()
+                            && !Iterables.isEmpty(basicVertex.getMessages())) {
+                        basicVertex.halt = false;
+                    }
+                    if (!basicVertex.isHalted()) {
+                        Iterator<M> vertexMsgIt =
+                            basicVertex.getMessages().iterator();
+                        context.progress();
+                        basicVertex.compute(vertexMsgIt);
+                        basicVertex.releaseResources();
+                    }
+                    if (basicVertex.isHalted()) {
+                        partitionStats.incrFinishedVertexCount();
+                    }
+                    partitionStats.incrVertexCount();
+                    partitionStats.addEdgeCount(basicVertex.getNumOutEdges());
+                }
+                partitionStatsList.add(partitionStats);
+            }
+        } while (!serviceWorker.finishSuperstep(partitionStatsList));
+        if (LOG.isInfoEnabled()) {
+            LOG.info("map: BSP application done " +
+                     "(global vertices marked done)");
+        }
+
+        serviceWorker.getWorkerContext().postApplication();
+        context.progress();
+    }
+
+    @Override
+    public void cleanup(Context context)
+            throws IOException, InterruptedException {
+        if (LOG.isInfoEnabled()) {
+            LOG.info("cleanup: Starting for " + getMapFunctions());
+        }
+        if (done) {
+            return;
+        }
+
+        if (serviceWorker != null) {
+            serviceWorker.cleanup();
+        }
+        try {
+            if (masterThread != null) {
+                masterThread.join();
+            }
+        } catch (InterruptedException e) {
+            // cleanup phase -- just log the error
+            LOG.error("cleanup: Master thread couldn't join");
+        }
+        if (zkManager != null) {
+            zkManager.offlineZooKeeperServers(
+                ZooKeeperManager.State.FINISHED);
+        }
+    }
+
+    @Override
+    public void run(Context context) throws IOException, InterruptedException {
+        // Notify the master more quickly if a worker fails, rather than
+        // waiting for ZooKeeper to time out and delete the ephemeral znodes
+        try {
+            setup(context);
+            while (context.nextKeyValue()) {
+                map(context.getCurrentKey(),
+                    context.getCurrentValue(),
+                    context);
+            }
+            cleanup(context);
+        } catch (Exception e) {
+            if (mapFunctions == MapFunctions.WORKER_ONLY) {
+                serviceWorker.failureCleanup();
+            }
+            throw new IllegalStateException(
+                "run: Caught an unrecoverable exception " + e.getMessage(), e);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/GraphState.java b/src/main/java/org/apache/giraph/graph/GraphState.java
new file mode 100644
index 0000000..d1474a9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/GraphState.java
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.graph;
+
+import org.apache.giraph.comm.WorkerCommunications;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+
+/**
+ * Global state of the graph.  Should be treated as a singleton (but is kept
+ * as a regular bean to ease unit testing).
+ *
+ * @param <I> vertex id
+ * @param <V> vertex data
+ * @param <E> edge data
+ * @param <M> message data
+ */
+@SuppressWarnings("rawtypes")
+public class GraphState<I extends WritableComparable, V extends Writable,
+        E extends Writable, M extends Writable> {
+    /** Graph-wide superstep */
+    private long superstep = 0;
+    /** Graph-wide number of vertices */
+    private long numVertices = -1;
+    /** Graph-wide number of edges */
+    private long numEdges = -1;
+    /** Graph-wide map context */
+    private Mapper.Context context;
+    /** Graph-wide BSP Mapper for this Vertex */
+    private GraphMapper<I, V, E, M> graphMapper;
+    /** Graph-wide worker communications */
+    private WorkerCommunications<I, V, E, M> workerCommunications;
+
+    public long getSuperstep() {
+        return superstep;
+    }
+
+    public GraphState<I, V, E, M> setSuperstep(long superstep) {
+        this.superstep = superstep;
+        return this;
+    }
+
+    public long getNumVertices() {
+        return numVertices;
+    }
+
+    public GraphState<I, V, E, M> setNumVertices(long numVertices) {
+        this.numVertices = numVertices;
+        return this;
+    }
+
+    public long getNumEdges() {
+        return numEdges;
+    }
+
+    public GraphState<I, V, E, M> setNumEdges(long numEdges) {
+        this.numEdges = numEdges;
+        return this;
+    }
+
+    public Mapper.Context getContext() {
+        return context;
+    }
+
+    public GraphState<I, V, E, M> setContext(Mapper.Context context) {
+        this.context = context;
+        return this;
+    }
+
+    public GraphMapper<I, V, E, M> getGraphMapper() {
+        return graphMapper;
+    }
+
+    public GraphState<I, V, E, M> setGraphMapper(
+            GraphMapper<I, V, E, M> graphMapper) {
+        this.graphMapper = graphMapper;
+        return this;
+    }
+
+    public GraphState<I, V, E, M> setWorkerCommunications(
+            WorkerCommunications<I, V, E, M> workerCommunications) {
+        this.workerCommunications = workerCommunications;
+        return this;
+    }
+
+    public WorkerCommunications<I, V, E, M> getWorkerCommunications() {
+        return workerCommunications;
+    }
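+
+    // A minimal usage sketch (illustrative only): the setters return
+    // "this", so state can be configured fluently, e.g. in tests:
+    //
+    //   GraphState<I, V, E, M> state = new GraphState<I, V, E, M>()
+    //       .setSuperstep(2)
+    //       .setNumVertices(100)
+    //       .setNumEdges(500);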
+}
diff --git a/src/main/java/org/apache/giraph/graph/HashMapVertex.java b/src/main/java/org/apache/giraph/graph/HashMapVertex.java
new file mode 100644
index 0000000..d2f86cd
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/HashMapVertex.java
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * User applications can subclass {@link HashMapVertex}, which stores
+ * the outbound edges in a HashMap, for efficient edge random-access.  Note
+ * that {@link EdgeListVertex} is much more memory efficient for static graphs.
+ * User applications which need to implement their own
+ * in-memory data structures should subclass {@link MutableVertex}.
+ *
+ * Package access will prevent users from accessing internal methods.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class HashMapVertex<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        extends MutableVertex<I, V, E, M> {
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(HashMapVertex.class);
+    /** Vertex id */
+    private I vertexId = null;
+    /** Vertex value */
+    private V vertexValue = null;
+    /** Map of destination vertices and their edge values */
+    protected final Map<I, Edge<I, E>> destEdgeMap =
+        new HashMap<I, Edge<I, E>>();
+    /** List of incoming messages from the previous superstep */
+    private final List<M> msgList = Lists.newArrayList();
+
+    @Override
+    public void initialize(
+            I vertexId, V vertexValue, Map<I, E> edges, Iterable<M> messages) {
+        if (vertexId != null) {
+            setVertexId(vertexId);
+        }
+        if (vertexValue != null) {
+            setVertexValue(vertexValue);
+        }
+        if (edges != null && !edges.isEmpty()) {
+            for (Map.Entry<I, E> entry : edges.entrySet()) {
+                destEdgeMap.put(
+                    entry.getKey(),
+                    new Edge<I, E>(entry.getKey(), entry.getValue()));
+            }
+        }
+        if (messages != null) {
+            Iterables.<M>addAll(msgList, messages);
+        }
+    }
+
+    @Override
+    public final boolean addEdge(I targetVertexId, E edgeValue) {
+        if (destEdgeMap.put(
+                targetVertexId,
+                new Edge<I, E>(targetVertexId, edgeValue)) != null) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("addEdge: Vertex=" + vertexId +
+                          ": already added an edge value for dest vertex id " +
+                          targetVertexId);
+            }
+            return false;
+        } else {
+            return true;
+        }
+    }
+
+    @Override
+    public long getSuperstep() {
+        return getGraphState().getSuperstep();
+    }
+
+    @Override
+    public final void setVertexId(I vertexId) {
+        this.vertexId = vertexId;
+    }
+
+    @Override
+    public final I getVertexId() {
+        return vertexId;
+    }
+
+    @Override
+    public final V getVertexValue() {
+        return vertexValue;
+    }
+
+    @Override
+    public final void setVertexValue(V vertexValue) {
+        this.vertexValue = vertexValue;
+    }
+
+    @Override
+    public E getEdgeValue(I targetVertexId) {
+        Edge<I, E> edge = destEdgeMap.get(targetVertexId);
+        return edge != null ? edge.getEdgeValue() : null;
+    }
+
+    @Override
+    public boolean hasEdge(I targetVertexId) {
+        return destEdgeMap.containsKey(targetVertexId);
+    }
+
+    /**
+     * Get an iterator to the edges on this vertex.
+     *
+     * @return An iterator over the target vertex ids (iteration order is
+     *         undefined, since the edges are stored in a HashMap)
+     */
+    @Override
+    public Iterator<I> iterator() {
+        return destEdgeMap.keySet().iterator();
+    }
+
+    @Override
+    public int getNumOutEdges() {
+        return destEdgeMap.size();
+    }
+
+    @Override
+    public E removeEdge(I targetVertexId) {
+        Edge<I, E> edge = destEdgeMap.remove(targetVertexId);
+        if (edge != null) {
+            return edge.getEdgeValue();
+        } else {
+            return null;
+        }
+    }
+
+    @Override
+    public final void sendMsgToAllEdges(M msg) {
+        if (msg == null) {
+            throw new IllegalArgumentException(
+                "sendMsgToAllEdges: Cannot send null message to all edges");
+        }
+        for (Edge<I, E> edge : destEdgeMap.values()) {
+            sendMsg(edge.getDestVertexId(), msg);
+        }
+    }
+
+    @Override
+    final public void readFields(DataInput in) throws IOException {
+        vertexId = BspUtils.<I>createVertexIndex(getConf());
+        vertexId.readFields(in);
+        boolean hasVertexValue = in.readBoolean();
+        if (hasVertexValue) {
+            vertexValue = BspUtils.<V>createVertexValue(getConf());
+            vertexValue.readFields(in);
+        }
+        long edgeMapSize = in.readLong();
+        for (long i = 0; i < edgeMapSize; ++i) {
+            Edge<I, E> edge = new Edge<I, E>();
+            edge.setConf(getConf());
+            edge.readFields(in);
+            addEdge(edge.getDestVertexId(), edge.getEdgeValue());
+        }
+        long msgListSize = in.readLong();
+        for (long i = 0; i < msgListSize; ++i) {
+            M msg = BspUtils.<M>createMessageValue(getConf());
+            msg.readFields(in);
+            msgList.add(msg);
+        }
+        halt = in.readBoolean();
+    }
+
+    @Override
+    final public void write(DataOutput out) throws IOException {
+        vertexId.write(out);
+        out.writeBoolean(vertexValue != null);
+        if (vertexValue != null) {
+            vertexValue.write(out);
+        }
+        out.writeLong(destEdgeMap.size());
+        for (Edge<I, E> edge : destEdgeMap.values()) {
+            edge.write(out);
+        }
+        out.writeLong(msgList.size());
+        for (M msg : msgList) {
+            msg.write(out);
+        }
+        out.writeBoolean(halt);
+    }
+
+    @Override
+    void putMessages(Iterable<M> messages) {
+        msgList.clear();
+        for (M message : messages) {
+            msgList.add(message);
+        }
+    }
+
+    @Override
+    public Iterable<M> getMessages() {
+        return Iterables.unmodifiableIterable(msgList);
+    }
+
+    @Override
+    void releaseResources() {
+        // Hint to GC to free the messages
+        msgList.clear();
+    }
+
+    @Override
+    public String toString() {
+        return "Vertex(id=" + getVertexId() + ",value=" + getVertexValue() +
+            ",#edges=" + destEdgeMap.size() + ")";
+    }
+}
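+
+// A minimal subclass sketch (illustrative only; MyShortestPathsVertex and
+// its compute() body are hypothetical user code):
+//
+//   public class MyShortestPathsVertex extends HashMapVertex<LongWritable,
+//           DoubleWritable, FloatWritable, DoubleWritable> {
+//       @Override
+//       public void compute(Iterator<DoubleWritable> msgIterator) {
+//           // combine incoming messages, update the vertex value,
+//           // send messages along out-edges, then voteToHalt()
+//       }
+//   }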
+
diff --git a/src/main/java/org/apache/giraph/graph/IntIntNullIntVertex.java b/src/main/java/org/apache/giraph/graph/IntIntNullIntVertex.java
new file mode 100644
index 0000000..2d6b3c5
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/IntIntNullIntVertex.java
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import com.google.common.collect.Iterables;
+import org.apache.giraph.utils.UnmodifiableIntArrayIterator;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.NullWritable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+/**
+ * Simple implementation of {@link BasicVertex} using an int as id, value
+ * and message.  Edges are immutable and unweighted.  This class aims to be
+ * as memory efficient as possible.
+ */
+public abstract class IntIntNullIntVertex extends
+        BasicVertex<IntWritable, IntWritable, NullWritable, IntWritable> {
+
+    private int id;
+    private int value;
+
+    private int[] neighbors;
+    private int[] messages;
+
+    @Override
+    public void initialize(IntWritable vertexId, IntWritable vertexValue,
+            Map<IntWritable, NullWritable> edges,
+            Iterable<IntWritable> messages) {
+        id = vertexId.get();
+        value = vertexValue.get();
+        this.neighbors = new int[edges.size()];
+        int n = 0;
+        for (IntWritable neighbor : edges.keySet()) {
+            this.neighbors[n++] = neighbor.get();
+        }
+        this.messages = new int[Iterables.size(messages)];
+        n = 0;
+        for (IntWritable message : messages) {
+            this.messages[n++] = message.get();
+        }
+    }
+
+    @Override
+    public IntWritable getVertexId() {
+        return new IntWritable(id);
+    }
+
+    @Override
+    public IntWritable getVertexValue() {
+        return new IntWritable(value);
+    }
+
+    @Override
+    public void setVertexValue(IntWritable vertexValue) {
+        value = vertexValue.get();
+    }
+
+    @Override
+    public Iterator<IntWritable> iterator() {
+        return new UnmodifiableIntArrayIterator(neighbors);
+    }
+
+    @Override
+    public NullWritable getEdgeValue(IntWritable targetVertexId) {
+        return NullWritable.get();
+    }
+
+    @Override
+    public boolean hasEdge(IntWritable targetVertexId) {
+        for (int neighbor : neighbors) {
+            if (neighbor == targetVertexId.get()) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public int getNumOutEdges() {
+        return neighbors.length;
+    }
+
+    @Override
+    public void sendMsgToAllEdges(final IntWritable message) {
+        for (int neighbor : neighbors) {
+            sendMsg(new IntWritable(neighbor), message);
+        }
+    }
+
+    @Override
+    public Iterable<IntWritable> getMessages() {
+        return new Iterable<IntWritable>() {
+            @Override
+            public Iterator<IntWritable> iterator() {
+                return new UnmodifiableIntArrayIterator(messages);
+            }
+        };
+    }
+
+    @Override
+    public void putMessages(Iterable<IntWritable> newMessages) {
+        messages = new int[Iterables.size(newMessages)];
+        int n = 0;
+        for (IntWritable message : newMessages) {
+            messages[n++] = message.get();
+        }
+    }
+
+    @Override
+    void releaseResources() {
+        messages = new int[0];
+    }
+
+    @Override
+    public void write(final DataOutput out) throws IOException {
+        out.writeInt(id);
+        out.writeInt(value);
+        out.writeInt(neighbors.length);
+        for (int n = 0; n < neighbors.length; n++) {
+            out.writeInt(neighbors[n]);
+        }
+        out.writeInt(messages.length);
+        for (int n = 0; n < messages.length; n++) {
+            out.writeInt(messages[n]);
+        }
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+        id = in.readInt();
+        value = in.readInt();
+        int numEdges = in.readInt();
+        neighbors = new int[numEdges];
+        for (int n = 0; n < numEdges; n++) {
+            neighbors[n] = in.readInt();
+        }
+        int numMessages = in.readInt();
+        messages = new int[numMessages];
+        for (int n = 0; n < numMessages; n++) {
+            messages[n] = in.readInt();
+        }
+    }
+
+}
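+// A minimal usage sketch (ids, values and the surrounding setup are
+// illustrative; any concrete subclass works):
+//
+//   IntIntNullIntVertex v = ...;
+//   Map<IntWritable, NullWritable> edges =
+//       new HashMap<IntWritable, NullWritable>();
+//   edges.put(new IntWritable(2), NullWritable.get());
+//   v.initialize(new IntWritable(1), new IntWritable(10), edges,
+//                Collections.<IntWritable>emptyList());
+//
+// After initialize(), neighbor ids and messages live in primitive int[]
+// arrays; the getters materialize Writable objects on demand.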
diff --git a/src/main/java/org/apache/giraph/graph/LongDoubleFloatDoubleVertex.java b/src/main/java/org/apache/giraph/graph/LongDoubleFloatDoubleVertex.java
new file mode 100644
index 0000000..b875736
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/LongDoubleFloatDoubleVertex.java
@@ -0,0 +1,312 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.graph;
+
+import com.google.common.collect.UnmodifiableIterator;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.log4j.Logger;
+import org.apache.mahout.math.function.DoubleProcedure;
+import org.apache.mahout.math.function.LongFloatProcedure;
+import org.apache.mahout.math.function.LongProcedure;
+import org.apache.mahout.math.list.DoubleArrayList;
+import org.apache.mahout.math.map.OpenLongFloatHashMap;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+public abstract class LongDoubleFloatDoubleVertex extends
+        MutableVertex<LongWritable, DoubleWritable, FloatWritable,
+        DoubleWritable> {
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(LongDoubleFloatDoubleVertex.class);
+
+    private long vertexId;
+    private double vertexValue;
+    private OpenLongFloatHashMap verticesWithEdgeValues =
+        new OpenLongFloatHashMap();
+    private DoubleArrayList messageList = new DoubleArrayList();
+
+    @Override
+    public void initialize(LongWritable vertexIdW, DoubleWritable vertexValueW,
+                           Map<LongWritable, FloatWritable> edgesW,
+                           Iterable<DoubleWritable> messagesW) {
+        if (vertexIdW != null) {
+            vertexId = vertexIdW.get();
+        }
+        if (vertexValueW != null) {
+            vertexValue = vertexValueW.get();
+        }
+        if (edgesW != null) {
+            for (Map.Entry<LongWritable, FloatWritable> entry :
+                    edgesW.entrySet()) {
+                verticesWithEdgeValues.put(entry.getKey().get(),
+                                           entry.getValue().get());
+            }
+        }
+        if (messagesW != null) {
+            for (DoubleWritable m : messagesW) {
+                messageList.add(m.get());
+            }
+        }
+    }
+
+    @Override
+    public final boolean addEdge(LongWritable targetId,
+                                 FloatWritable edgeValue) {
+        // OpenLongFloatHashMap.put() returns true iff the key was new,
+        // so the new-edge and already-present branches are ordered
+        // accordingly.
+        if (verticesWithEdgeValues.put(targetId.get(), edgeValue.get())) {
+            return true;
+        } else {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("addEdge: Vertex=" + vertexId +
+                        ": already added an edge value for dest vertex id " +
+                        targetId.get());
+            }
+            return false;
+        }
+    }
+
+    @Override
+    public FloatWritable removeEdge(LongWritable targetVertexId) {
+        long target = targetVertexId.get();
+        if (verticesWithEdgeValues.containsKey(target)) {
+            float value = verticesWithEdgeValues.get(target);
+            verticesWithEdgeValues.removeKey(target);
+            return new FloatWritable(value);
+        } else {
+            return null;
+        }
+    }
+
+    @Override
+    public final void setVertexId(LongWritable vertexId) {
+        this.vertexId = vertexId.get();
+    }
+
+    @Override
+    public final LongWritable getVertexId() {
+        // TODO: possibly not make new objects every time?
+        return new LongWritable(vertexId);
+    }
+
+    @Override
+    public final DoubleWritable getVertexValue() {
+        return new DoubleWritable(vertexValue);
+    }
+
+    @Override
+    public final void setVertexValue(DoubleWritable vertexValue) {
+        this.vertexValue = vertexValue.get();
+    }
+
+    @Override
+    public final void sendMsg(LongWritable id, DoubleWritable msg) {
+        if (msg == null) {
+            throw new IllegalArgumentException(
+                    "sendMsg: Cannot send null message to " + id);
+        }
+        getGraphState().getWorkerCommunications().sendMessageReq(id, msg);
+    }
+
+    @Override
+    public final void sendMsgToAllEdges(final DoubleWritable msg) {
+        if (msg == null) {
+            throw new IllegalArgumentException(
+                "sendMsgToAllEdges: Cannot send null message to all edges");
+        }
+        final MutableVertex<LongWritable, DoubleWritable, FloatWritable,
+            DoubleWritable> vertex = this;
+        verticesWithEdgeValues.forEachKey(new LongProcedure() {
+            @Override
+            public boolean apply(long destVertexId) {
+                vertex.sendMsg(new LongWritable(destVertexId), msg);
+                return true;
+            }
+        });
+    }
+
+    @Override
+    public long getNumVertices() {
+        return getGraphState().getNumVertices();
+    }
+
+    @Override
+    public long getNumEdges() {
+        return getGraphState().getNumEdges();
+    }
+
+    @Override
+    public Iterator<LongWritable> iterator() {
+        final long[] destVertices = verticesWithEdgeValues.keys().elements();
+        final int destVerticesSize = verticesWithEdgeValues.size();
+        return new Iterator<LongWritable>() {
+            int offset = 0;
+            @Override public boolean hasNext() {
+                return offset < destVerticesSize;
+            }
+
+            @Override public LongWritable next() {
+                return new LongWritable(destVertices[offset++]);
+            }
+
+            @Override public void remove() {
+                throw new UnsupportedOperationException(
+                    "Mutation disallowed for edge list via iterator");
+            }
+        };
+    }
+
+    @Override
+    public FloatWritable getEdgeValue(LongWritable targetVertexId) {
+        return new FloatWritable(
+            verticesWithEdgeValues.get(targetVertexId.get()));
+    }
+
+    @Override
+    public boolean hasEdge(LongWritable targetVertexId) {
+        return verticesWithEdgeValues.containsKey(targetVertexId.get());
+    }
+
+    @Override
+    public int getNumOutEdges() {
+        return verticesWithEdgeValues.size();
+    }
+
+    @Override
+    public long getSuperstep() {
+        return getGraphState().getSuperstep();
+    }
+
+    @Override
+    public final void readFields(DataInput in) throws IOException {
+        vertexId = in.readLong();
+        vertexValue = in.readDouble();
+        long edgeMapSize = in.readLong();
+        for (long i = 0; i < edgeMapSize; ++i) {
+            long destVertexId = in.readLong();
+            float edgeValue = in.readFloat();
+            verticesWithEdgeValues.put(destVertexId, edgeValue);
+        }
+        long msgListSize = in.readLong();
+        for (long i = 0; i < msgListSize; ++i) {
+            messageList.add(in.readDouble());
+        }
+        halt = in.readBoolean();
+    }
+
+    @Override
+    public final void write(final DataOutput out) throws IOException {
+        out.writeLong(vertexId);
+        out.writeDouble(vertexValue);
+        out.writeLong(verticesWithEdgeValues.size());
+        verticesWithEdgeValues.forEachPair(new LongFloatProcedure() {
+            @Override
+            public boolean apply(long destVertexId, float edgeValue) {
+                try {
+                    out.writeLong(destVertexId);
+                    out.writeFloat(edgeValue);
+                } catch (IOException e) {
+                    throw new IllegalStateException(
+                        "apply: IOException when not allowed", e);
+                }
+                return true;
+            }
+        });
+        out.writeLong(messageList.size());
+        messageList.forEach(new DoubleProcedure() {
+             @Override
+             public boolean apply(double message) {
+                 try {
+                     out.writeDouble(message);
+                 } catch (IOException e) {
+                     throw new IllegalStateException(
+                         "apply: IOException when not allowed", e);
+                 }
+                 return true;
+             }
+        });
+        out.writeBoolean(halt);
+    }
+
+    @Override
+    void putMessages(Iterable<DoubleWritable> messages) {
+        messageList.clear();
+        for (DoubleWritable message : messages) {
+            messageList.add(message.get());
+        }
+    }
+
+    @Override
+    void releaseResources() {
+        // Hint to GC to free the messages
+        messageList.clear();
+    }
+
+    @Override
+    public Iterable<DoubleWritable> getMessages() {
+        return new UnmodifiableDoubleWritableIterable(messageList);
+    }
+
+    @Override
+    public String toString() {
+        return "Vertex(id=" + getVertexId() + ",value=" + getVertexValue() +
+                ",#edges=" + getNumOutEdges() + ")";
+    }
+
+    private class UnmodifiableDoubleWritableIterable
+            implements Iterable<DoubleWritable> {
+
+        private final DoubleArrayList elementList;
+
+        public UnmodifiableDoubleWritableIterable(
+                DoubleArrayList elementList) {
+            this.elementList = elementList;
+        }
+
+        @Override
+        public Iterator<DoubleWritable> iterator() {
+            return new UnmodifiableDoubleWritableIterator(
+                    elementList);
+        }
+    }
+
+    private class UnmodifiableDoubleWritableIterator
+            extends UnmodifiableIterator<DoubleWritable> {
+        private final DoubleArrayList elementList;
+        private int offset = 0;
+
+        UnmodifiableDoubleWritableIterator(DoubleArrayList elementList) {
+            this.elementList = elementList;
+        }
+
+        @Override
+        public boolean hasNext() {
+            return offset < elementList.size();
+        }
+
+        @Override
+        public DoubleWritable next() {
+            return new DoubleWritable(elementList.get(offset++));
+        }
+    }
+}
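+// Sketch of the Writable round-trip this class supports (stream setup
+// shown only for illustration; both vertices are assumed to be instances
+// of the same concrete subclass):
+//
+//   ByteArrayOutputStream bos = new ByteArrayOutputStream();
+//   vertex.write(new DataOutputStream(bos));
+//   copy.readFields(new DataInputStream(
+//       new ByteArrayInputStream(bos.toByteArray())));
+//
+// write()/readFields() cover the id, the value, the edge map, the message
+// list and the halt flag, but no subclass state.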
diff --git a/src/main/java/org/apache/giraph/graph/MasterThread.java b/src/main/java/org/apache/giraph/graph/MasterThread.java
new file mode 100644
index 0000000..2bd2d96
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/MasterThread.java
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.TreeMap;
+
+import org.apache.giraph.bsp.ApplicationState;
+import org.apache.giraph.bsp.CentralizedServiceMaster;
+import org.apache.giraph.bsp.SuperstepState;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+import org.apache.log4j.Logger;
+
+/**
+ * Master thread that coordinates the activities of the tasks.  It runs on
+ * all task processes; however, it only executes the master algorithm if
+ * ZooKeeper has elected it the "leader".
+ */
+@SuppressWarnings("rawtypes")
+public class MasterThread<I extends WritableComparable,
+                          V extends Writable,
+                          E extends Writable,
+                          M extends Writable> extends Thread {
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(MasterThread.class);
+    /** Reference to shared BspService */
+    private CentralizedServiceMaster<I, V, E, M> bspServiceMaster = null;
+    /** Context (for counters) */
+    private final Context context;
+    /** Use superstep counters? */
+    private final boolean superstepCounterOn;
+    /** Setup seconds */
+    private double setupSecs = 0d;
+    /** Superstep timer (in seconds) map */
+    private final Map<Long, Double> superstepSecsMap =
+        new TreeMap<Long, Double>();
+
+    /** Counter group name for the Giraph timers */
+    public static final String GIRAPH_TIMERS_COUNTER_GROUP_NAME =
+        "Giraph Timers";
+
+    /**
+     *  Constructor.
+     *
+     *  @param bspServiceMaster Master that already exists (setup() is
+     *         called at the start of run())
+     *  @param context Context from the mapper (for counters)
+     */
+    MasterThread(BspServiceMaster<I, V, E, M> bspServiceMaster,
+                 Context context) {
+        super(MasterThread.class.getName());
+        this.bspServiceMaster = bspServiceMaster;
+        this.context = context;
+        superstepCounterOn = context.getConfiguration().getBoolean(
+            GiraphJob.USE_SUPERSTEP_COUNTERS,
+            GiraphJob.USE_SUPERSTEP_COUNTERS_DEFAULT);
+    }
+
+    /**
+     * The master algorithm.  The algorithm should be able to withstand
+     * failures and resume as necessary since the master may switch during a
+     * job.
+     */
+    @Override
+    public void run() {
+        // Algorithm:
+        // 1. Become the master
+        // 2. If desired, restart from a manual checkpoint
+        // 3. Run all supersteps until complete
+        try {
+            long startMillis = System.currentTimeMillis();
+            long endMillis = 0;
+            bspServiceMaster.setup();
+            if (bspServiceMaster.becomeMaster()) {
+                // Attempt to create InputSplits if necessary. Bail out if that fails.
+                if (bspServiceMaster.getRestartedSuperstep() != BspService.UNSET_SUPERSTEP
+                        || bspServiceMaster.createInputSplits() != -1) {
+                    long setupMillis = (System.currentTimeMillis() - startMillis);
+                    context.getCounter(GIRAPH_TIMERS_COUNTER_GROUP_NAME,
+                            "Setup (milliseconds)").
+                            increment(setupMillis);
+                    setupSecs = setupMillis / 1000.0d;
+                    SuperstepState superstepState = SuperstepState.INITIAL;
+                    long cachedSuperstep = BspService.UNSET_SUPERSTEP;
+                    while (superstepState != SuperstepState.ALL_SUPERSTEPS_DONE) {
+                        long startSuperstepMillis = System.currentTimeMillis();
+                        cachedSuperstep = bspServiceMaster.getSuperstep();
+                        superstepState = bspServiceMaster.coordinateSuperstep();
+                        long superstepMillis = System.currentTimeMillis() -
+                                startSuperstepMillis;
+                        superstepSecsMap.put(Long.valueOf(cachedSuperstep),
+                                superstepMillis / 1000.0d);
+                        if (LOG.isInfoEnabled()) {
+                            LOG.info("masterThread: Coordination of superstep " +
+                                    cachedSuperstep + " took " +
+                                    superstepMillis / 1000.0d +
+                                    " seconds ended with state " + superstepState +
+                                    " and is now on superstep " +
+                                    bspServiceMaster.getSuperstep());
+                        }
+                        if (superstepCounterOn) {
+                            String counterPrefix;
+                            if (cachedSuperstep ==
+                                    BspService.INPUT_SUPERSTEP) {
+                                counterPrefix = "Vertex input superstep";
+                            } else {
+                                counterPrefix = "Superstep " + cachedSuperstep;
+                            }
+                            context.getCounter(GIRAPH_TIMERS_COUNTER_GROUP_NAME,
+                                    counterPrefix +
+                                    " (milliseconds)").
+                                    increment(superstepMillis);
+                        }
+
+                        // If a worker failed, restart from a known good superstep
+                        if (superstepState == SuperstepState.WORKER_FAILURE) {
+                            bspServiceMaster.restartFromCheckpoint(
+                                    bspServiceMaster.getLastGoodCheckpoint());
+                        }
+                        endMillis = System.currentTimeMillis();
+                    }
+                    bspServiceMaster.setJobState(ApplicationState.FINISHED, -1, -1);
+                }
+            }
+            bspServiceMaster.cleanup();
+            if (!superstepSecsMap.isEmpty()) {
+                context.getCounter(
+                        GIRAPH_TIMERS_COUNTER_GROUP_NAME,
+                        "Shutdown (milliseconds)").
+                        increment(System.currentTimeMillis() - endMillis);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("setup: Took " + setupSecs + " seconds.");
+                }
+                for (Entry<Long, Double> entry : superstepSecsMap.entrySet()) {
+                    if (LOG.isInfoEnabled()) {
+                        if (entry.getKey().longValue() ==
+                                BspService.INPUT_SUPERSTEP) {
+                            LOG.info("vertex input superstep: Took " +
+                                     entry.getValue() + " seconds.");
+                        } else {
+                            LOG.info("superstep " + entry.getKey() + ": Took " +
+                                     entry.getValue() + " seconds.");
+                        }
+                    }
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("shutdown: Took " +
+                             (System.currentTimeMillis() - endMillis) /
+                             1000.0d + " seconds.");
+                    LOG.info("total: Took " +
+                             ((System.currentTimeMillis() / 1000.0d) -
+                             setupSecs) + " seconds.");
+                }
+                context.getCounter(
+                    GIRAPH_TIMERS_COUNTER_GROUP_NAME,
+                    "Total (milliseconds)").
+                    increment(System.currentTimeMillis() - startMillis);
+            }
+        } catch (Exception e) {
+            LOG.error("masterThread: Master algorithm failed: ", e);
+            throw new RuntimeException(e);
+        }
+    }
+}
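+// For reference, the counters emitted above all land in the
+// "Giraph Timers" group and follow this layout (values illustrative):
+//
+//   Giraph Timers
+//     Setup (milliseconds)                    314
+//     Vertex input superstep (milliseconds)   1592
+//     Superstep 0 (milliseconds)              653
+//     ...
+//     Shutdown (milliseconds)                 58
+//     Total (milliseconds)                    9793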
diff --git a/src/main/java/org/apache/giraph/graph/MutableVertex.java b/src/main/java/org/apache/giraph/graph/MutableVertex.java
new file mode 100644
index 0000000..dcfd9ae
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/MutableVertex.java
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Interface used by VertexReader to set the properties of a new vertex
+ * or mutate the graph.
+ */
+@SuppressWarnings("rawtypes")
+public abstract class MutableVertex<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        extends BasicVertex<I, V, E, M> {
+    /**
+     * Set the vertex id
+     *
+     * @param id Vertex id is set to this (instantiated by the user)
+     */
+    public abstract void setVertexId(I id);
+
+    /**
+     * Add an edge for this vertex (happens immediately)
+     *
+     * @param targetVertexId target vertex
+     * @param edgeValue value of the edge
+     * @return true if the edge was added, false otherwise
+     */
+    public abstract boolean addEdge(I targetVertexId, E edgeValue);
+
+    /**
+     * Removes an edge for this vertex (happens immediately).
+     *
+     * @param targetVertexId the target vertex id of the edge to be removed.
+     * @return the value of the edge which was removed (or null if no
+     *         edge existed to targetVertexId)
+     */
+    public abstract E removeEdge(I targetVertexId);
+
+    /**
+     * Create a vertex to add to the graph.  Calls initialize() for the vertex
+     * as well.
+     *
+     * @param vertexId Id of the new vertex
+     * @param vertexValue Value of the new vertex
+     * @param edges Map of destination vertex id to edge value
+     * @param messages Initial messages for the new vertex
+     * @return A new vertex for adding to the graph
+     */
+    public BasicVertex<I, V, E, M> instantiateVertex(
+        I vertexId, V vertexValue, Map<I, E> edges, Iterable<M> messages) {
+        MutableVertex<I, V, E, M> mutableVertex =
+            (MutableVertex<I, V, E, M>) BspUtils
+               .<I, V, E, M>createVertex(getContext().getConfiguration());
+        mutableVertex.setGraphState(getGraphState());
+        mutableVertex.initialize(vertexId, vertexValue, edges, messages);
+        return mutableVertex;
+    }
+
+    /**
+     * Sends a request to create a vertex that will be available during the
+     * next superstep.  Use instantiateVertex() to do the instantiation.
+     *
+     * @param vertex User created vertex
+     */
+    public void addVertexRequest(BasicVertex<I, V, E, M> vertex)
+            throws IOException {
+        getGraphState().getWorkerCommunications().
+            addVertexReq(vertex);
+    }
+
+    /**
+     * Request to remove a vertex from the graph
+     * (applied just prior to the next superstep).
+     *
+     * @param vertexId Id of the vertex to be removed.
+     */
+    public void removeVertexRequest(I vertexId) throws IOException {
+        getGraphState().getWorkerCommunications().
+        removeVertexReq(vertexId);
+    }
+
+    /**
+     * Request to add an edge of a vertex in the graph
+     * (processed just prior to the next superstep)
+     *
+     * @param sourceVertexId Source vertex id of edge
+     * @param edge Edge to add
+     */
+    public void addEdgeRequest(I sourceVertexId, Edge<I, E> edge)
+            throws IOException {
+        getGraphState().getWorkerCommunications().
+            addEdgeReq(sourceVertexId, edge);
+    }
+
+    /**
+     * Request to remove an edge of a vertex from the graph
+     * (processed just prior to the next superstep).
+     *
+     * @param sourceVertexId Source vertex id of edge
+     * @param destVertexId Destination vertex id of edge
+     */
+    public void removeEdgeRequest(I sourceVertexId, I destVertexId)
+            throws IOException {
+        getGraphState().getWorkerCommunications().
+            removeEdgeReq(sourceVertexId, destVertexId);
+    }
+}
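+// A sketch of issuing mutations from application code (the ids and the
+// Edge constructor shown are assumptions for illustration):
+//
+//   // remove this vertex before the next superstep
+//   removeVertexRequest(getVertexId());
+//   // or grow the graph instead
+//   addEdgeRequest(getVertexId(), new Edge<I, E>(targetId, edgeValue));
+//
+// The four *Request() methods are asynchronous: they are queued through
+// the worker communications layer and applied just prior to the next
+// superstep, unlike addEdge()/removeEdge(), which happen immediately.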
diff --git a/src/main/java/org/apache/giraph/graph/TextAggregatorWriter.java b/src/main/java/org/apache/giraph/graph/TextAggregatorWriter.java
new file mode 100644
index 0000000..113ba2e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/TextAggregatorWriter.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+
+/**
+ * Default implementation of {@link AggregatorWriter}. Writes one line of
+ * text per aggregator, containing the aggregator name, the aggregator
+ * value and the aggregator class.
+ */
+public class TextAggregatorWriter 
+        implements AggregatorWriter {
+    /** The filename of the outputfile */
+    public static final String FILENAME = 
+        "giraph.textAggregatorWriter.filename";
+    /** The frequency of writing:
+     *  - NEVER: never write; files aren't created at all
+     *  - AT_THE_END: aggregators are written only when the computation is over
+     *  - a positive int n: write every n supersteps (1 = every superstep,
+     *    2 = every two supersteps, and so on)
+     */
+    public static final String FREQUENCY = 
+        "giraph.textAggregatorWriter.frequency";
+    private static final String DEFAULT_FILENAME = "aggregatorValues";
+    /** Signal for "never write" frequency */
+    public static final int NEVER = 0;
+    /** Signal for "write only the final values" frequency */
+    public static final int AT_THE_END = -1;
+    /** Handle to the outputfile */
+    protected FSDataOutputStream output;
+    private int frequency;
+    
+    @Override
+    @SuppressWarnings("rawtypes")
+    public void initialize(Context context, long attempt) throws IOException {
+        Configuration conf = context.getConfiguration();
+        frequency = conf.getInt(FREQUENCY, NEVER);
+        String filename = conf.get(FILENAME, DEFAULT_FILENAME);
+        if (frequency != NEVER) {
+            Path p = new Path(filename + "_" + attempt);
+            FileSystem fs = FileSystem.get(conf);
+            if (fs.exists(p)) {
+                throw new RuntimeException("aggregatorWriter file already" +
+                    " exists: " + p.getName());
+            }
+            output = fs.create(p);
+        }
+    }
+
+    @Override
+    public final void writeAggregator(
+            Map<String, Aggregator<Writable>> aggregators,
+            long superstep) throws IOException {
+        
+        if (shouldWrite(superstep)) {
+            for (Entry<String, Aggregator<Writable>> a: 
+                    aggregators.entrySet()) {
+                output.writeUTF(aggregatorToString(a.getKey(), 
+                                                   a.getValue(), 
+                                                   superstep));
+            }
+            output.flush();
+        }
+    }
+    
+    /**
+     * Implements the way an aggregator is converted into a String.
+     * Override this if you want to implement your own text format.
+     * 
+     * @param aggregatorName Name of the aggregator
+     * @param a Aggregator
+     * @param superstep Current superstep
+     * @return The String representation for the aggregator
+     */
+    protected String aggregatorToString(String aggregatorName, 
+                                        Aggregator<Writable> a,
+                                        long superstep) {
+
+        return new StringBuilder("superstep=").append(superstep).append("\t")
+            .append(aggregatorName).append("=").append(a.getAggregatedValue())
+            .append("\t").append(a.getClass().getCanonicalName()).append("\n")
+            .toString();
+    }
+
+    private boolean shouldWrite(long superstep) {
+        // frequency > 0 (rather than != NEVER) so that AT_THE_END (-1)
+        // doesn't accidentally match superstep % frequency == 0
+        return ((frequency == AT_THE_END && superstep == LAST_SUPERSTEP) ||
+                (frequency > 0 && superstep % frequency == 0));
+    }
+
+    @Override
+    public void close() throws IOException {
+        if (output != null) {
+            output.close();
+        }
+    }
+}
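+// Configuration sketch using the keys defined above (job setup code is
+// assumed):
+//
+//   Configuration conf = job.getConfiguration();
+//   conf.set(TextAggregatorWriter.FILENAME, "myAggregators");
+//   // 1 = write every superstep; AT_THE_END = only the final values
+//   conf.setInt(TextAggregatorWriter.FREQUENCY, 1);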
diff --git a/src/main/java/org/apache/giraph/graph/VertexChanges.java b/src/main/java/org/apache/giraph/graph/VertexChanges.java
new file mode 100644
index 0000000..28dbdde
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexChanges.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.util.List;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Structure to hold all the possible graph mutations that can occur during a
+ * superstep.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public interface VertexChanges<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable> {
+
+    /**
+     * Get the added vertices for this particular vertex index from the previous
+     * superstep.
+     *
+     * @return List of vertices for this vertex index.
+     */
+    List<BasicVertex<I, V, E, M>> getAddedVertexList();
+
+    /**
+     * Get the number of times this vertex was removed in the previous
+     * superstep.
+     *
+     * @return Count of times this vertex was removed in the previous superstep
+     */
+    int getRemovedVertexCount();
+
+    /**
+     * Get the added edges for this particular vertex index from the previous
+     * superstep
+     *
+     * @return List of added edges for this vertex index
+     */
+    List<Edge<I, E>> getAddedEdgeList();
+
+    /**
+     * Get the removed edges by their destination vertex index.
+     *
+     * @return List of destination edges for removal from this vertex index
+     */
+    List<I> getRemovedEdgeList();
+}
diff --git a/src/main/java/org/apache/giraph/graph/VertexCombiner.java b/src/main/java/org/apache/giraph/graph/VertexCombiner.java
new file mode 100644
index 0000000..7d39c10
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexCombiner.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Abstract class to extend for combining of messages sent to the same vertex.
+ *
+ * @param <I> Vertex index
+ * @param <M> Message data
+ */
+@SuppressWarnings("rawtypes")
+public abstract class VertexCombiner<I extends WritableComparable,
+                                     M extends Writable> {
+
+   /**
+    * Combines message values for a particular vertex index.
+    *
+    * @param vertexIndex Index of the vertex getting these messages
+    * @param messages Iterable of the messages to be combined
+    * @return Iterable of the combined messages. The returned value cannot
+    *         be null and its size is required to be smaller than or equal
+    *         to the size of <code>messages</code>.
+    * @throws IOException
+    */
+    public abstract Iterable<M> combine(I vertexIndex,
+            Iterable<M> messages) throws IOException;
+}
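+// A sketch of a concrete combiner (class name hypothetical): keep only
+// the minimum message per destination vertex.
+//
+//   public class MinimumIntCombiner
+//           extends VertexCombiner<IntWritable, IntWritable> {
+//       @Override
+//       public Iterable<IntWritable> combine(IntWritable vertexIndex,
+//               Iterable<IntWritable> messages) throws IOException {
+//           int minimum = Integer.MAX_VALUE;
+//           for (IntWritable message : messages) {
+//               minimum = Math.min(minimum, message.get());
+//           }
+//           return Collections.singletonList(new IntWritable(minimum));
+//       }
+//   }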
diff --git a/src/main/java/org/apache/giraph/graph/VertexEdgeCount.java b/src/main/java/org/apache/giraph/graph/VertexEdgeCount.java
new file mode 100644
index 0000000..a8bece3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexEdgeCount.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+/**
+ * Simple immutable structure for storing a final vertex and edge count.
+ */
+public class VertexEdgeCount {
+    /** Immutable vertices */
+    private final long vertexCount;
+    /** Immutable edges */
+    private final long edgeCount;
+
+    public VertexEdgeCount() {
+        vertexCount = 0;
+        edgeCount = 0;
+    }
+
+    public VertexEdgeCount(long vertexCount, long edgeCount) {
+        this.vertexCount = vertexCount;
+        this.edgeCount = edgeCount;
+    }
+
+    public long getVertexCount() {
+        return vertexCount;
+    }
+
+    public long getEdgeCount() {
+        return edgeCount;
+    }
+
+    public VertexEdgeCount incrVertexEdgeCount(
+            VertexEdgeCount vertexEdgeCount) {
+        return new VertexEdgeCount(
+            vertexCount + vertexEdgeCount.getVertexCount(),
+            edgeCount + vertexEdgeCount.getEdgeCount());
+    }
+
+    public VertexEdgeCount incrVertexEdgeCount(
+            long vertexCount, long edgeCount) {
+        return new VertexEdgeCount(
+            this.vertexCount + vertexCount,
+            this.edgeCount + edgeCount);
+    }
+
+    @Override
+    public String toString() {
+        return "(v=" + getVertexCount() + ", e=" + getEdgeCount() + ")";
+    }
+}
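+// Usage sketch: instances are immutable, so accumulation returns new
+// objects (numbers illustrative):
+//
+//   VertexEdgeCount total = new VertexEdgeCount();
+//   total = total.incrVertexEdgeCount(10, 40);
+//   total = total.incrVertexEdgeCount(new VertexEdgeCount(5, 20));
+//   total.toString();   // "(v=15, e=60)"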
diff --git a/src/main/java/org/apache/giraph/graph/VertexInputFormat.java b/src/main/java/org/apache/giraph/graph/VertexInputFormat.java
new file mode 100644
index 0000000..0b1a86f
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexInputFormat.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Use this to load data for a BSP application.  Note that the InputSplit must
+ * also implement Writable.  The InputSplits will determine the partitioning of
+ * vertices across the mappers, so keep that in consideration when implementing
+ * getSplits().
+ *
+ * @param <I> Vertex id
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class VertexInputFormat<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> {
+
+    /**
+     * Logically split the vertices for a graph processing application.
+     *
+     * Each {@link InputSplit} is then assigned to a worker for processing.
+     *
+     * <p><i>Note</i>: The split is a <i>logical</i> split of the inputs and
+     * the input files are not physically split into chunks. For example, a
+     * split could be an <i>&lt;input-file-path, start, offset&gt;</i> tuple.
+     * The InputFormat also creates the {@link VertexReader} to read the
+     * {@link InputSplit}.
+     *
+     * Also, the number of workers is a hint that implementations can use to
+     * intelligently determine how many splits to create (if this is
+     * adjustable) at runtime.
+     *
+     * @param context Context of the job
+     * @param numWorkers Number of workers used for this job
+     * @return a list of {@link InputSplit}s for the job.
+     */
+    public abstract List<InputSplit> getSplits(
+        JobContext context, int numWorkers)
+        throws IOException, InterruptedException;
+
+    /**
+     * Create a vertex reader for a given split. The framework will call
+     * {@link VertexReader#initialize(InputSplit, TaskAttemptContext)} before
+     * the split is used.
+     *
+     * @param split the split to be read
+     * @param context the information about the task
+     * @return a new vertex reader
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    public abstract VertexReader<I, V, E, M> createVertexReader(
+        InputSplit split,
+        TaskAttemptContext context) throws IOException;
+}
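+// A sketch of one getSplits() strategy (the multiplier is an assumption,
+// not part of this API): over-partition so a slow worker cannot stall
+// the whole input superstep.
+//
+//   public List<InputSplit> getSplits(JobContext context, int numWorkers)
+//           throws IOException, InterruptedException {
+//       int splitCount = numWorkers * 4; // hypothetical over-partitioning
+//       // ...build splitCount logical <path, start, offset> splits...
+//   }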
diff --git a/src/main/java/org/apache/giraph/graph/VertexMutations.java b/src/main/java/org/apache/giraph/graph/VertexMutations.java
new file mode 100644
index 0000000..f201a2e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexMutations.java
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+/**
+ * Structure to hold all the possible graph mutations that can occur during a
+ * superstep.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class VertexMutations<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable> implements VertexChanges<I, V, E, M> {
+    /** List of added vertices during the last superstep */
+    private final List<BasicVertex<I, V, E, M>> addedVertexList =
+        new ArrayList<BasicVertex<I, V, E, M>>();
+    /** Count of remove vertex requests */
+    private int removedVertexCount = 0;
+    /** List of added edges */
+    private final List<Edge<I, E>> addedEdgeList = new ArrayList<Edge<I, E>>();
+    /** List of removed edges */
+    private final List<I> removedEdgeList = new ArrayList<I>();
+
+    @Override
+    public List<BasicVertex<I, V, E, M>> getAddedVertexList() {
+        return addedVertexList;
+    }
+
+    /**
+     * Add a vertex mutation
+     *
+     * @param vertex Vertex to be added
+     */
+    public void addVertex(BasicVertex<I, V, E, M> vertex) {
+        addedVertexList.add(vertex);
+    }
+
+    @Override
+    public int getRemovedVertexCount() {
+        return removedVertexCount;
+    }
+
+    /**
+     * Record a vertex removal mutation (increments the removed vertex count)
+     */
+    public void removeVertex() {
+        ++removedVertexCount;
+    }
+
+    @Override
+    public List<Edge<I, E>> getAddedEdgeList() {
+        return addedEdgeList;
+    }
+
+    /**
+     * Add an edge to this vertex
+     *
+     * @param edge Edge to be added
+     */
+    public void addEdge(Edge<I, E> edge) {
+        addedEdgeList.add(edge);
+    }
+
+    @Override
+    public List<I> getRemovedEdgeList() {
+        return removedEdgeList;
+    }
+
+    /**
+     * Remove an edge on this vertex
+     *
+     * @param destinationVertexId Vertex index of the destination of the edge
+     */
+    public void removeEdge(I destinationVertexId) {
+        removedEdgeList.add(destinationVertexId);
+    }
+
+    @Override
+    public String toString() {
+        JSONObject jsonObject = new JSONObject();
+        try {
+            jsonObject.put("added vertices", getAddedVertexList().toString());
+            jsonObject.put("added edges", getAddedEdgeList().toString());
+            jsonObject.put("removed vertex count", getRemovedVertexCount());
+            jsonObject.put("removed edges", getRemovedEdgeList().toString());
+            return jsonObject.toString();
+        } catch (JSONException e) {
+            throw new IllegalStateException("toString: Got a JSON exception",
+                                            e);
+        }
+    }
+}
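+// toString() output sketch (values illustrative):
+//
+//   {"added vertices":"[...]","added edges":"[...]",
+//    "removed vertex count":1,"removed edges":"[...]"}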
diff --git a/src/main/java/org/apache/giraph/graph/VertexOutputFormat.java b/src/main/java/org/apache/giraph/graph/VertexOutputFormat.java
new file mode 100644
index 0000000..28078ad
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexOutputFormat.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.IOException;
+
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Implement to output the graph after the computation.  It is modeled
+ * directly after the Hadoop OutputFormat.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class VertexOutputFormat<
+        I extends WritableComparable, V extends Writable, E extends Writable> {
+    /**
+     * Create a vertex writer for a given split. The framework will call
+     * {@link VertexWriter#initialize(TaskAttemptContext)} before
+     * the writer is used.
+     *
+     * @param context the information about the task
+     * @return a new vertex writer
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    public abstract VertexWriter<I, V, E> createVertexWriter(
+        TaskAttemptContext context) throws IOException, InterruptedException;
+
+    /**
+     * Check for validity of the output-specification for the job.
+     * (Copied from Hadoop OutputFormat)
+     *
+     * <p>This is to validate the output specification for the job when the
+     * job is submitted.  Typically it checks that the output does not already
+     * exist, throwing an exception when it does, so that output is not
+     * overwritten.</p>
+     *
+     * @param context information about the job
+     * @throws IOException when output should not be attempted
+     */
+    public abstract void checkOutputSpecs(JobContext context)
+        throws IOException, InterruptedException;
+
+    /**
+     * Get the output committer for this output format. This is responsible
+     * for ensuring the output is committed correctly.
+     * (Copied from Hadoop OutputFormat)
+     *
+     * @param context the task context
+     * @return an output committer
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    public abstract OutputCommitter getOutputCommitter(
+        TaskAttemptContext context) throws IOException, InterruptedException;
+}
diff --git a/src/main/java/org/apache/giraph/graph/VertexReader.java b/src/main/java/org/apache/giraph/graph/VertexReader.java
new file mode 100644
index 0000000..d5a00e6
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexReader.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+@SuppressWarnings("rawtypes")
+public interface VertexReader<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable> {
+    /**
+     * Use the input split and context to setup reading the vertices.
+     * Guaranteed to be called prior to any other function.
+     *
+     * @param inputSplit Input split to read vertices from
+     * @param context Context of the task
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    void initialize(InputSplit inputSplit, TaskAttemptContext context)
+        throws IOException, InterruptedException;
+
+    /**
+     * Advances to the next vertex, if any.
+     *
+     * @return false iff there are no more vertices
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    boolean nextVertex() throws IOException, InterruptedException;
+
+    /**
+     * Gets the vertex most recently read.
+     *
+     * @return the current vertex which has been read.  nextVertex() should
+     *         be called first.
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    BasicVertex<I, V, E, M> getCurrentVertex() throws IOException, InterruptedException;
+
+    /**
+     * Close this {@link VertexReader} to future operations.
+     *
+     * @throws IOException
+     */
+    void close() throws IOException;
+
+    /**
+     * How much of the input has the {@link VertexReader} consumed, i.e.,
+     * how much of it has been processed so far?
+     *
+     * @return Progress from <code>0.0</code> to <code>1.0</code>.
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    float getProgress() throws IOException, InterruptedException;
+}
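+// The framework drives implementations with a loop of roughly this shape
+// (simplified sketch):
+//
+//   reader.initialize(inputSplit, context);
+//   while (reader.nextVertex()) {
+//       BasicVertex<I, V, E, M> vertex = reader.getCurrentVertex();
+//       // ...load the vertex into its partition...
+//   }
+//   reader.close();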
diff --git a/src/main/java/org/apache/giraph/graph/VertexResolver.java b/src/main/java/org/apache/giraph/graph/VertexResolver.java
new file mode 100644
index 0000000..b971df2
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexResolver.java
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import com.google.common.collect.Iterables;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.log4j.Logger;
+
+import java.util.List;
+
+/**
+ * Default implementation of how to resolve vertex creation/removal, messages
+ * to nonexistent vertices, etc.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class VertexResolver<I extends WritableComparable, V extends Writable,
+        E extends Writable, M extends Writable>
+        implements BasicVertexResolver<I, V, E, M>, Configurable {
+    /** Configuration */
+    private Configuration conf = null;
+
+    /** Graph state */
+    private GraphState<I, V, E, M> graphState;
+
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(VertexResolver.class);
+
+    @Override
+    public BasicVertex<I, V, E, M> resolve(
+            I vertexId,
+            BasicVertex<I, V, E, M> vertex,
+            VertexChanges<I, V, E, M> vertexChanges,
+            Iterable<M> messages) {
+        // Default algorithm:
+        // 1. If the vertex exists, first prune the edges
+        // 2. If vertex removal desired, remove the vertex.
+        // 3. If creation of vertex desired, pick first vertex
+        // 4. If vertex doesn't exist, but got messages, create
+        // 5. If edge addition, add the edges
+        if (vertex != null) {
+            if (vertexChanges != null) {
+                List<I> removedEdgeList = vertexChanges.getRemovedEdgeList();
+                for (I removedDestVertex : removedEdgeList) {
+                    E removeEdge =
+                        ((MutableVertex<I, V, E, M>) vertex).removeEdge(
+                            removedDestVertex);
+                    if (removeEdge == null) {
+                        LOG.warn("resolve: Failed to remove edge with " +
+                                 "destination " + removedDestVertex + "on " +
+                                 vertex + " since it doesn't exist.");
+                    }
+                }
+                if (vertexChanges.getRemovedVertexCount() > 0) {
+                    vertex = null;
+                }
+            }
+        }
+
+        if (vertex == null) {
+            if (vertexChanges != null) {
+                if (!vertexChanges.getAddedVertexList().isEmpty()) {
+                    vertex = vertexChanges.getAddedVertexList().get(0);
+                }
+            }
+            if (vertex == null && messages != null
+                    && !Iterables.isEmpty(messages)) {
+                vertex = instantiateVertex();
+                vertex.initialize(vertexId,
+                                  BspUtils.<V>createVertexValue(getConf()),
+                                  null,
+                                  messages);
+            }
+        } else {
+            if ((vertexChanges != null) &&
+                    (!vertexChanges.getAddedVertexList().isEmpty())) {
+                LOG.warn("resolve: Tried to add a vertex with id = " +
+                         vertex.getVertexId() + " when one already " +
+                        "exists.  Ignoring the add vertex request.");
+            }
+        }
+
+        if (vertexChanges != null &&
+                !vertexChanges.getAddedEdgeList().isEmpty()) {
+            MutableVertex<I, V, E, M> mutableVertex =
+                (MutableVertex<I, V, E, M>) vertex;
+            for (Edge<I, E> edge : vertexChanges.getAddedEdgeList()) {
+                edge.setConf(getConf());
+                mutableVertex.addEdge(edge.getDestVertexId(),
+                                      edge.getEdgeValue());
+            }
+        }
+
+        return vertex;
+    }
+
+    @Override
+    public BasicVertex<I, V, E, M> instantiateVertex() {
+        BasicVertex<I, V, E, M> vertex =
+            BspUtils.<I, V, E, M>createVertex(getConf());
+        vertex.setGraphState(graphState);
+        return vertex;
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    public void setGraphState(GraphState<I, V, E, M> graphState) {
+        this.graphState = graphState;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/VertexWriter.java b/src/main/java/org/apache/giraph/graph/VertexWriter.java
new file mode 100644
index 0000000..8c30039
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/VertexWriter.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Implement to output a vertex range of the graph after the computation
+ *
+ * @param <I> Vertex id
+ * @param <V> Vertex value
+ * @param <E> Edge value
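+ *
+ * <p>A minimal sketch of an implementation that discards its output (type
+ * arguments and names are illustrative only):
+ * <pre>
+ * public class NullVertexWriter implements
+ *         VertexWriter&lt;LongWritable, DoubleWritable, FloatWritable&gt; {
+ *     public void initialize(TaskAttemptContext context) { }
+ *     public void writeVertex(
+ *             BasicVertex&lt;LongWritable, DoubleWritable, FloatWritable, ?&gt;
+ *                 vertex) { }
+ *     public void close(TaskAttemptContext context) { }
+ * }
+ * </pre>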
+ */
+@SuppressWarnings("rawtypes")
+public interface VertexWriter<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable> {
+    /**
+     * Use the context to setup writing the vertices.
+     * Guaranteed to be called prior to any other function.
+     *
+     * @param context Context used to write the vertices
+     * @throws IOException
+     */
+    void initialize(TaskAttemptContext context) throws IOException;
+
+    /**
+     * Writes the next vertex and associated data.
+     *
+     * @param vertex Vertex to write
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    void writeVertex(BasicVertex<I, V, E, ?> vertex)
+        throws IOException, InterruptedException;
+
+    /**
+     * Close this {@link VertexWriter} to future operations.
+     *
+     * @param context the context of the task
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    void close(TaskAttemptContext context)
+        throws IOException, InterruptedException;
+}
diff --git a/src/main/java/org/apache/giraph/graph/WorkerContext.java b/src/main/java/org/apache/giraph/graph/WorkerContext.java
new file mode 100644
index 0000000..707c17b
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/WorkerContext.java
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Mapper;
+
+/**
+ * WorkerContext allows for the execution of user code
+ * on a per-worker basis. There's one WorkerContext per worker.
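+ *
+ * <p>A minimal sketch of a custom context (names are illustrative, not part
+ * of this patch):
+ * <pre>
+ * public class EdgeCountWorkerContext extends WorkerContext {
+ *     public void preApplication() { }
+ *     public void postApplication() { }
+ *     public void preSuperstep() {
+ *         System.out.println("superstep " + getSuperstep() +
+ *             " starts with " + getNumEdges() + " edges");
+ *     }
+ *     public void postSuperstep() { }
+ * }
+ * </pre>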
+ */
+@SuppressWarnings("rawtypes")
+public abstract class WorkerContext implements AggregatorUsage {
+    /** Global graph state */
+    private GraphState graphState;
+
+    public void setGraphState(GraphState graphState) {
+        this.graphState = graphState;
+    }
+
+    /**
+     * Initialize the WorkerContext.
+     * This method is executed once on each Worker before the first
+     * superstep starts.
+     *
+     * @throws IllegalAccessException
+     * @throws InstantiationException
+     */
+    public abstract void preApplication() throws InstantiationException,
+        IllegalAccessException;
+
+    /**
+     * Finalize the WorkerContext.
+     * This method is executed once on each Worker after the last
+     * superstep ends.
+     */
+    public abstract void postApplication();
+
+    /**
+     * Execute user code.
+     * This method is executed once on each Worker before each
+     * superstep starts.
+     */
+    public abstract void preSuperstep();
+
+    /**
+     * Execute user code.
+     * This method is executed once on each Worker after each
+     * superstep ends.
+     */
+    public abstract void postSuperstep();
+
+    /**
+     * Retrieves the current superstep.
+     *
+     * @return Current superstep
+     */
+    public long getSuperstep() {
+        return graphState.getSuperstep();
+    }
+
+    /**
+     * Get the total (all workers) number of vertices that
+     * existed in the previous superstep.
+     *
+     * @return Total number of vertices (-1 if first superstep)
+     */
+    public long getNumVertices() {
+        return graphState.getNumVertices();
+    }
+
+    /**
+     * Get the total (all workers) number of edges that
+     * existed in the previous superstep.
+     *
+     * @return Total number of edges (-1 if first superstep)
+     */
+    public long getNumEdges() {
+        return graphState.getNumEdges();
+    }
+
+    /**
+     * Get the mapper context
+     *
+     * @return Mapper context
+     */
+    public Mapper.Context getContext() {
+        return graphState.getContext();
+    }
+
+    @Override
+    public final <A extends Writable> Aggregator<A> registerAggregator(
+            String name,
+            Class<? extends Aggregator<A>> aggregatorClass)
+            throws InstantiationException, IllegalAccessException {
+        return graphState.getGraphMapper().getAggregatorUsage().
+            registerAggregator(name, aggregatorClass);
+    }
+
+    @Override
+    public final Aggregator<? extends Writable> getAggregator(String name) {
+        return graphState.getGraphMapper().getAggregatorUsage().
+            getAggregator(name);
+    }
+
+    @Override
+    public final boolean useAggregator(String name) {
+        return graphState.getGraphMapper().getAggregatorUsage().
+            useAggregator(name);
+    }
+}
\ No newline at end of file
diff --git a/src/main/java/org/apache/giraph/graph/WorkerInfo.java b/src/main/java/org/apache/giraph/graph/WorkerInfo.java
new file mode 100644
index 0000000..51f9313
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/WorkerInfo.java
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Information about a worker that is sent to the master and other workers.
+ */
+public class WorkerInfo implements Writable {
+    /** Worker hostname */
+    private String hostname;
+    /** Partition id of this worker */
+    private int partitionId = -1;
+    /** Port that the RPC server is using */
+    private int port = -1;
+    /** Hostname + "_" + id for easier debugging */
+    private String hostnameId;
+
+    /**
+     * Constructor for reflection
+     */
+    public WorkerInfo() {
+    }
+
+    public WorkerInfo(String hostname, int partitionId, int port) {
+        this.hostname = hostname;
+        this.partitionId = partitionId;
+        this.port = port;
+        this.hostnameId = hostname + "_" + partitionId;
+    }
+
+    public String getHostname() {
+        return hostname;
+    }
+
+    public int getPartitionId() {
+        return partitionId;
+    }
+
+    public String getHostnameId() {
+        return hostnameId;
+    }
+
+    public int getPort() {
+        return port;
+    }
+
+    @Override
+    public boolean equals(Object other) {
+        if (other instanceof WorkerInfo) {
+            WorkerInfo workerInfo = (WorkerInfo) other;
+            if (hostname.equals(workerInfo.getHostname()) &&
+                    (partitionId == workerInfo.getPartitionId()) &&
+                    (port == workerInfo.getPort())) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public int hashCode() {
+        int result = 17;
+        result = 37 * result + port;
+        result = 37 * result + hostname.hashCode();
+        result = 37 * result + partitionId;
+        return result;
+    }
+
+    @Override
+    public String toString() {
+        return "Worker(hostname=" + hostname + ", MRpartition=" +
+            partitionId + ", port=" + port + ")";
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        hostname = input.readUTF();
+        partitionId = input.readInt();
+        port = input.readInt();
+        hostnameId = hostname + "_" + partitionId;
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        output.writeUTF(hostname);
+        output.writeInt(partitionId);
+        output.writeInt(port);
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/BasicPartitionOwner.java b/src/main/java/org/apache/giraph/graph/partition/BasicPartitionOwner.java
new file mode 100644
index 0000000..e5e8588
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/BasicPartitionOwner.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Basic partition owner, can be subclassed for more complicated partition
+ * owner implementations.
+ */
+public class BasicPartitionOwner implements PartitionOwner, Configurable {
+    /** Configuration */
+    private Configuration conf;
+    /** Partition id */
+    private int partitionId = -1;
+    /** Owning worker information */
+    private WorkerInfo workerInfo;
+    /** Previous (if any) worker info */
+    private WorkerInfo previousWorkerInfo;
+    /** Checkpoint files prefix for this partition */
+    private String checkpointFilesPrefix;
+
+    public BasicPartitionOwner() {
+    }
+
+    public BasicPartitionOwner(int partitionId, WorkerInfo workerInfo) {
+        this(partitionId, workerInfo, null, null);
+    }
+
+    public BasicPartitionOwner(int partitionId,
+                               WorkerInfo workerInfo,
+                               WorkerInfo previousWorkerInfo,
+                               String checkpointFilesPrefix) {
+        this.partitionId = partitionId;
+        this.workerInfo = workerInfo;
+        this.previousWorkerInfo = previousWorkerInfo;
+        this.checkpointFilesPrefix = checkpointFilesPrefix;
+    }
+
+    @Override
+    public int getPartitionId() {
+        return partitionId;
+    }
+
+    @Override
+    public WorkerInfo getWorkerInfo() {
+        return workerInfo;
+    }
+
+    @Override
+    public void setWorkerInfo(WorkerInfo workerInfo) {
+        this.workerInfo = workerInfo;
+    }
+
+    @Override
+    public WorkerInfo getPreviousWorkerInfo() {
+        return previousWorkerInfo;
+    }
+
+    @Override
+    public void setPreviousWorkerInfo(WorkerInfo workerInfo) {
+        this.previousWorkerInfo = workerInfo;
+    }
+
+    @Override
+    public String getCheckpointFilesPrefix() {
+        return checkpointFilesPrefix;
+    }
+
+    @Override
+    public void setCheckpointFilesPrefix(String checkpointFilesPrefix) {
+        this.checkpointFilesPrefix = checkpointFilesPrefix;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        partitionId = input.readInt();
+        workerInfo = new WorkerInfo();
+        workerInfo.readFields(input);
+        boolean hasPreviousWorkerInfo = input.readBoolean();
+        if (hasPreviousWorkerInfo) {
+            previousWorkerInfo = new WorkerInfo();
+            previousWorkerInfo.readFields(input);
+        }
+        boolean hasCheckpointFilePrefix = input.readBoolean();
+        if (hasCheckpointFilePrefix) {
+            checkpointFilesPrefix = input.readUTF();
+        }
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        output.writeInt(partitionId);
+        workerInfo.write(output);
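+        // Optional fields are preceded by a presence flag so readFields()
+        // knows whether to expect them.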
+        if (previousWorkerInfo != null) {
+            output.writeBoolean(true);
+            previousWorkerInfo.write(output);
+        } else {
+            output.writeBoolean(false);
+        }
+        if (checkpointFilesPrefix != null) {
+            output.writeBoolean(true);
+            output.writeUTF(checkpointFilesPrefix);
+        } else {
+            output.writeBoolean(false);
+        }
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+
+    @Override
+    public String toString() {
+        return "(id=" + partitionId + ",cur=" + workerInfo + ",prev=" +
+               previousWorkerInfo + ",ckpt_file=" + checkpointFilesPrefix + ")";
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/GraphPartitionerFactory.java b/src/main/java/org/apache/giraph/graph/partition/GraphPartitionerFactory.java
new file mode 100644
index 0000000..0e98bed
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/GraphPartitionerFactory.java
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Defines the partitioning framework for this application.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public interface GraphPartitionerFactory<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> {
+    /**
+     * Create the {@link MasterGraphPartitioner} used by the master.
+     * Instantiated once by the master and reused.
+     *
+     * @return Instantiated master graph partitioner
+     */
+    MasterGraphPartitioner<I, V, E, M> createMasterGraphPartitioner();
+
+    /**
+     * Create the {@link WorkerGraphPartitioner} used by the worker.
+     * Instantiated once by every worker and reused.
+     *
+     * @return Instantiated worker graph partitioner
+     */
+    WorkerGraphPartitioner<I, V, E, M> createWorkerGraphPartitioner();
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/HashMasterPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/HashMasterPartitioner.java
new file mode 100644
index 0000000..6e940f1
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/HashMasterPartitioner.java
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.log4j.Logger;
+
+/**
+ * Master will execute a hash based partitioning.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class HashMasterPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> implements
+        MasterGraphPartitioner<I, V, E, M> {
+    /** Provided configuration */
+    private Configuration conf;
+    /** Specified partition count (overrides calculation) */
+    private final int userPartitionCount;
+    /** Partition count (calculated in createInitialPartitionOwners) */
+    private int partitionCount = -1;
+    /** Save the last generated partition owner list */
+    private List<PartitionOwner> partitionOwnerList;
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(HashMasterPartitioner.class);
+
+    /**
+     * ZooKeeper limits the data in a single znode to 1 MB, and each entry
+     * averages somewhat more than 300 bytes.
+     */
+    private static final int MAX_PARTITIONS = 1024 * 1024 / 350;
+
+    /**
+     * Multiplier for the current workers squared
+     */
+    public static final String PARTITION_COUNT_MULTIPLIER =
+        "hash.masterPartitionCountMultipler";
+    public static final float DEFAULT_PARTITION_COUNT_MULTIPLIER = 1.0f;
+
+    /** Overrides default partition count calculation if not -1 */
+    public static final String USER_PARTITION_COUNT =
+        "hash.userPartitionCount";
+    public static final int DEFAULT_USER_PARTITION_COUNT = -1;
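+
+    // A minimal sketch of overriding these defaults through the job's
+    // Configuration (the numeric values are purely illustrative):
+    //
+    //   conf.setInt(USER_PARTITION_COUNT, 64);            // fixed count
+    //   conf.setFloat(PARTITION_COUNT_MULTIPLIER, 0.5f);  // scale workers^2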
+
+    public HashMasterPartitioner(Configuration conf) {
+        this.conf = conf;
+        userPartitionCount = conf.getInt(USER_PARTITION_COUNT,
+                                         DEFAULT_USER_PARTITION_COUNT);
+    }
+
+    @Override
+    public Collection<PartitionOwner> createInitialPartitionOwners(
+            Collection<WorkerInfo> availableWorkerInfos, int maxWorkers) {
+        if (availableWorkerInfos.isEmpty()) {
+            throw new IllegalArgumentException(
+                "createInitialPartitionOwners: No available workers");
+        }
+        List<PartitionOwner> ownerList = new ArrayList<PartitionOwner>();
+        Iterator<WorkerInfo> workerIt = availableWorkerInfos.iterator();
+        if (userPartitionCount == DEFAULT_USER_PARTITION_COUNT) {
+            float multiplier = conf.getFloat(
+                PARTITION_COUNT_MULTIPLIER,
+                DEFAULT_PARTITION_COUNT_MULTIPLIER);
+            partitionCount =
+                Math.max((int) (multiplier * availableWorkerInfos.size() *
+                         availableWorkerInfos.size()),
+                         1);
+        } else {
+            partitionCount = userPartitionCount;
+        }
+        if (LOG.isInfoEnabled()) {
+            LOG.info("createInitialPartitionOwners: Creating " +
+                     partitionCount + ", default would have been " +
+                     (availableWorkerInfos.size() *
+                      availableWorkerInfos.size()) + " partitions.");
+        }
+        if (partitionCount > MAX_PARTITIONS) {
+            LOG.warn("createInitialPartitionOwners: " +
+                    "Reducing the partitionCount to " + MAX_PARTITIONS +
+                    " from " + partitionCount);
+            partitionCount = MAX_PARTITIONS;
+        }
+
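+        // Assign partitions to workers round-robin, restarting the iterator
+        // once every worker has been used.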
+        for (int i = 0; i < partitionCount; ++i) {
+            PartitionOwner owner = new BasicPartitionOwner(i, workerIt.next());
+            if (!workerIt.hasNext()) {
+                workerIt = availableWorkerInfos.iterator();
+            }
+            ownerList.add(owner);
+        }
+        this.partitionOwnerList = ownerList;
+        return ownerList;
+    }
+
+    @Override
+    public Collection<PartitionOwner> getCurrentPartitionOwners() {
+        return partitionOwnerList;
+    }
+
+    /**
+     * Subclasses can set the partition owner list.
+     *
+     * @param partitionOwnerList New partition owner list.
+     */
+    protected void setPartitionOwnerList(List<PartitionOwner>
+            partitionOwnerList) {
+        this.partitionOwnerList = partitionOwnerList;
+    }
+
+    @Override
+    public Collection<PartitionOwner> generateChangedPartitionOwners(
+            Collection<PartitionStats> allPartitionStatsList,
+            Collection<WorkerInfo> availableWorkerInfos,
+            int maxWorkers,
+            long superstep) {
+        return PartitionBalancer.balancePartitionsAcrossWorkers(
+            conf,
+            partitionOwnerList,
+            allPartitionStatsList,
+            availableWorkerInfos);
+    }
+
+    @Override
+    public PartitionStats createPartitionStats() {
+        return new PartitionStats();
+    }
+
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/HashPartitionerFactory.java b/src/main/java/org/apache/giraph/graph/partition/HashPartitionerFactory.java
new file mode 100644
index 0000000..87cbb67
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/HashPartitionerFactory.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Divides the vertices into partitions by their hash code, which balances
+ * well when the hash codes are uniformly distributed.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class HashPartitionerFactory<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        implements Configurable,
+        GraphPartitionerFactory<I, V, E, M> {
+    private Configuration conf;
+
+    @Override
+    public MasterGraphPartitioner<I, V, E, M> createMasterGraphPartitioner() {
+        return new HashMasterPartitioner<I, V, E, M>(getConf());
+    }
+
+    @Override
+    public WorkerGraphPartitioner<I, V, E, M> createWorkerGraphPartitioner() {
+        return new HashWorkerPartitioner<I, V, E, M>();
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/HashRangePartitionerFactory.java b/src/main/java/org/apache/giraph/graph/partition/HashRangePartitionerFactory.java
new file mode 100644
index 0000000..2646100
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/HashRangePartitionerFactory.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Divides the vertices into partitions by their hash code using ranges of the
+ * hash space.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class HashRangePartitionerFactory<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        implements Configurable, GraphPartitionerFactory<I, V, E, M> {
+    private Configuration conf;
+
+    @Override
+    public MasterGraphPartitioner<I, V, E, M> createMasterGraphPartitioner() {
+        return new HashMasterPartitioner<I, V, E, M>(getConf());
+    }
+
+    @Override
+    public WorkerGraphPartitioner<I, V, E, M> createWorkerGraphPartitioner() {
+        return new HashRangeWorkerPartitioner<I, V, E, M>();
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/HashRangeWorkerPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/HashRangeWorkerPartitioner.java
new file mode 100644
index 0000000..e64d793
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/HashRangeWorkerPartitioner.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Implements range-based partitioning from the id hash code.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class HashRangeWorkerPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        extends HashWorkerPartitioner<I, V, E, M> {
+    @Override
+    public PartitionOwner getPartitionOwner(I vertexId) {
+        int rangeSize = Integer.MAX_VALUE / getPartitionOwners().size();
+        // Mask the sign bit rather than call Math.abs(), which overflows for
+        // Integer.MIN_VALUE, and clamp the index since a hash code of
+        // Integer.MAX_VALUE could otherwise index one past the last owner.
+        int index = Math.min(
+            (vertexId.hashCode() & Integer.MAX_VALUE) / rangeSize,
+            partitionOwnerList.size() - 1);
+        return partitionOwnerList.get(index);
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/HashWorkerPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/HashWorkerPartitioner.java
new file mode 100644
index 0000000..f518ae5
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/HashWorkerPartitioner.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Implements hash-based partitioning from the id hash code.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
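+ *
+ * <p>For example, with 4 partition owners a vertex id that hashes to 10 is
+ * assigned to partition owner 10 % 4 = 2.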
+ */
+@SuppressWarnings("rawtypes")
+public class HashWorkerPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        implements WorkerGraphPartitioner<I, V, E, M> {
+    /** Mapping of the vertex ids to {@link PartitionOwner} */
+    protected List<PartitionOwner> partitionOwnerList =
+        new ArrayList<PartitionOwner>();
+
+    @Override
+    public PartitionOwner createPartitionOwner() {
+        return new BasicPartitionOwner();
+    }
+
+    @Override
+    public PartitionOwner getPartitionOwner(I vertexId) {
+        // Mask the sign bit rather than call Math.abs(), which overflows
+        // (stays negative) for Integer.MIN_VALUE.
+        return partitionOwnerList.get((vertexId.hashCode() & Integer.MAX_VALUE)
+                % partitionOwnerList.size());
+    }
+
+    @Override
+    public Collection<PartitionStats> finalizePartitionStats(
+            Collection<PartitionStats> workerPartitionStats,
+            Map<Integer, Partition<I, V, E, M>> partitionMap) {
+        // No modification necessary
+        return workerPartitionStats;
+    }
+
+    @Override
+    public PartitionExchange updatePartitionOwners(
+            WorkerInfo myWorkerInfo,
+            Collection<? extends PartitionOwner> masterSetPartitionOwners,
+            Map<Integer, Partition<I, V, E, M>> partitionMap) {
+        partitionOwnerList.clear();
+        partitionOwnerList.addAll(masterSetPartitionOwners);
+
+        Set<WorkerInfo> dependentWorkerSet = new HashSet<WorkerInfo>();
+        Map<WorkerInfo, List<Integer>> workerPartitionOwnerMap =
+            new HashMap<WorkerInfo, List<Integer>>();
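+        // Classify each owner that moved: if a partition now belongs to this
+        // worker, we depend on its previous owner; if it used to belong to
+        // this worker, record it in the map of partitions to send away.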
+        for (PartitionOwner partitionOwner : masterSetPartitionOwners) {
+            if (partitionOwner.getPreviousWorkerInfo() == null) {
+                continue;
+            } else if (partitionOwner.getWorkerInfo().equals(
+                       myWorkerInfo) &&
+                       partitionOwner.getPreviousWorkerInfo().equals(
+                       myWorkerInfo)) {
+                throw new IllegalStateException(
+                    "updatePartitionOwners: Impossible to have the same " +
+                    "previous and current worker info " + partitionOwner +
+                    " as me " + myWorkerInfo);
+            } else if (partitionOwner.getWorkerInfo().equals(myWorkerInfo)) {
+                dependentWorkerSet.add(partitionOwner.getPreviousWorkerInfo());
+            } else if (partitionOwner.getPreviousWorkerInfo().equals(
+                    myWorkerInfo)) {
+                if (workerPartitionOwnerMap.containsKey(
+                        partitionOwner.getWorkerInfo())) {
+                    workerPartitionOwnerMap.get(
+                        partitionOwner.getWorkerInfo()).add(
+                            partitionOwner.getPartitionId());
+                } else {
+                    List<Integer> partitionOwnerList = new ArrayList<Integer>();
+                    partitionOwnerList.add(partitionOwner.getPartitionId());
+                    workerPartitionOwnerMap.put(partitionOwner.getWorkerInfo(),
+                                                partitionOwnerList);
+                }
+            }
+        }
+
+        return new PartitionExchange(dependentWorkerSet,
+                                     workerPartitionOwnerMap);
+    }
+
+    @Override
+    public Collection<? extends PartitionOwner> getPartitionOwners() {
+        return partitionOwnerList;
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/MasterGraphPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/MasterGraphPartitioner.java
new file mode 100644
index 0000000..704ee4e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/MasterGraphPartitioner.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.Collection;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.giraph.graph.WorkerInfo;
+
+/**
+ * Determines how to divide the graph into partitions, how to manipulate
+ * partitions and then how to assign those partitions to workers.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public interface MasterGraphPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> {
+    /**
+     * Set some initial partition owners for the graph. Guaranteed to be called
+     * prior to the graph being loaded (initial or restart).
+     *
+     * @param availableWorkerInfos Workers available for partition assignment
+     * @param maxWorkers Maximum number of workers
+     * @return Collection of generated partition owners
+     */
+    Collection<PartitionOwner> createInitialPartitionOwners(
+            Collection<WorkerInfo> availableWorkerInfos, int maxWorkers);
+
+    /**
+     * After the worker stats have been merged to a single list, the master can
+     * use this information to send commands to the workers for any
+     * {@link Partition} changes. This protocol is specific to the
+     * {@link GraphPartitionerFactory} implementation.
+     *
+     * @param allPartitionStatsList All partition stats from all workers.
+     * @param availableWorkerInfos Workers available for partition assignment
+     * @param maxWorkers Maximum number of workers
+     * @param superstep Partition owners will be set for this superstep
+     * @return Collection of {@link PartitionOwner} objects that changed from
+     *         the previous superstep, empty list if no change.
+     */
+    Collection<PartitionOwner> generateChangedPartitionOwners(
+            Collection<PartitionStats> allPartitionStatsList,
+            Collection<WorkerInfo> availableWorkerInfos,
+            int maxWorkers,
+            long superstep);
+
+    /**
+     * Get current partition owners at this time.
+     *
+     * @return Collection of current {@link PartitionOwner} objects
+     */
+    Collection<PartitionOwner> getCurrentPartitionOwners();
+
+    /**
+     * Instantiate the {@link PartitionStats} implementation used to read the
+     * worker stats
+     *
+     * @return Instantiated {@link PartitionStats} object
+     */
+    PartitionStats createPartitionStats();
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/Partition.java b/src/main/java/org/apache/giraph/graph/partition/Partition.java
new file mode 100644
index 0000000..5173cf3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/Partition.java
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+
+/**
+ * A generic container that stores vertices.  Vertex ids will map to exactly
+ * one partition.
+ */
+@SuppressWarnings("rawtypes")
+public class Partition<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        implements Writable {
+    /** Configuration from the worker */
+    private final Configuration conf;
+    /** Partition id */
+    private final int partitionId;
+    /** Vertex map for this range (keyed by index) */
+    private final Map<I, BasicVertex<I, V, E, M>> vertexMap =
+        new HashMap<I, BasicVertex<I, V, E, M>>();
+
+    public Partition(Configuration conf, int partitionId) {
+        this.conf = conf;
+        this.partitionId = partitionId;
+    }
+
+    /**
+     * Get the vertex for this vertex index.
+     *
+     * @param vertexIndex Vertex index to search for
+     * @return Vertex if it exists, null otherwise
+     */
+    public BasicVertex<I, V, E, M> getVertex(I vertexIndex) {
+        return vertexMap.get(vertexIndex);
+    }
+
+    /**
+     * Put a vertex into the Partition
+     *
+     * @param vertex Vertex to put in the Partition
+     * @return old vertex value (i.e. null if none existed prior)
+     */
+    public BasicVertex<I, V, E, M> putVertex(BasicVertex<I, V, E, M> vertex) {
+        return vertexMap.put(vertex.getVertexId(), vertex);
+    }
+
+    /**
+     * Remove a vertex from the Partition
+     *
+     * @param vertexIndex Vertex index to remove
+     * @return The removed vertex, or null if it was not present
+     */
+    public BasicVertex<I, V, E, M> removeVertex(I vertexIndex) {
+        return vertexMap.remove(vertexIndex);
+    }
+
+    /**
+     * Get a collection of the vertices.
+     *
+     * @return Collection of the vertices
+     */
+    public Collection<BasicVertex<I, V, E, M>> getVertices() {
+        return vertexMap.values();
+    }
+
+    /**
+     * Get the number of edges in this partition.  Computed on the fly.
+     *
+     * @return Number of edges.
+     */
+    public long getEdgeCount() {
+        long edges = 0;
+        for (BasicVertex<I, V, E, M> vertex : vertexMap.values()) {
+            edges += vertex.getNumOutEdges();
+        }
+        return edges;
+    }
+
+    /**
+     * Get the partition id.
+     *
+     * @return Partition id of this partition.
+     */
+    public int getPartitionId() {
+        return partitionId;
+    }
+
+    @Override
+    public String toString() {
+        return "(id=" + getPartitionId() + ",V=" + vertexMap.size() +
+            ",E=" + getEdgeCount() + ")";
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        int vertices = input.readInt();
+        for (int i = 0; i < vertices; ++i) {
+            BasicVertex<I, V, E, M> vertex =
+                BspUtils.<I, V, E, M>createVertex(conf);
+            vertex.readFields(input);
+            if (vertexMap.put(vertex.getVertexId(), vertex) != null) {
+                throw new IllegalStateException(
+                    "readFields: " + this +
+                    " already has a vertex with the same id as " + vertex);
+            }
+        }
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        output.writeInt(vertexMap.size());
+        for (BasicVertex<I, V, E, M> vertex : vertexMap.values()) {
+            vertex.write(output);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/PartitionBalancer.java b/src/main/java/org/apache/giraph/graph/partition/PartitionBalancer.java
new file mode 100644
index 0000000..eb93f0a
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/PartitionBalancer.java
@@ -0,0 +1,268 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.log4j.Logger;
+
+/**
+ * Helper class for balancing partitions across a set of workers.
+ */
+public class PartitionBalancer {
+    /** Partition balancing algorithm */
+    public static final String PARTITION_BALANCE_ALGORITHM =
+        "hash.partitionBalanceAlgorithm";
+    public static final String STATIC_BALANCE_ALGORITHM =
+        "static";
+    public static final String EDGE_BALANCE_ALGORITHM =
+        "edges";
+    public static final String VERTICES_BALANCE_ALGORITHM =
+        "vertices";
+    /** Class logger */
+    private static final Logger LOG =
+        Logger.getLogger(PartitionBalancer.class);
+
+    /**
+     * What value to balance partitions with?  Edges, vertices?
+     */
+    private enum BalanceValue {
+        UNSET,
+        EDGES,
+        VERTICES
+    }
+
+    /**
+     * Get the value used to balance.
+     *
+     * @param partitionStat Partition stats to get the value from
+     * @param balanceValue Type of value to balance with
+     * @return Edge or vertex count, depending on the balance value
+     */
+    private static long getBalanceValue(PartitionStats partitionStat,
+                                        BalanceValue balanceValue) {
+        switch (balanceValue) {
+            case EDGES:
+                return partitionStat.getEdgeCount();
+            case VERTICES:
+                return partitionStat.getVertexCount();
+            default:
+                throw new IllegalArgumentException(
+                    "getBalanceValue: Illegal balance value " + balanceValue);
+        }
+    }
+
+    /**
+     * Used to sort the partition owners from lowest value to highest value
+     */
+    private static class PartitionOwnerComparator implements
+            Comparator<PartitionOwner> {
+        /** Map of owner to stats */
+        private final Map<PartitionOwner, PartitionStats> ownerStatMap;
+        /** Value type to compare on */
+        private final BalanceValue balanceValue;
+
+        /**
+         * Only constructor.
+         *
+         * @param ownerStatMap Map of partition owner to its stats
+         * @param balanceValue What value to compare with
+         */
+        public PartitionOwnerComparator(
+                Map<PartitionOwner, PartitionStats> ownerStatMap,
+                BalanceValue balanceValue) {
+            this.ownerStatMap = ownerStatMap;
+            this.balanceValue = balanceValue;
+        }
+
+        @Override
+        public int compare(PartitionOwner owner1, PartitionOwner owner2) {
+            // Compare explicitly rather than casting the long difference to
+            // int, which can overflow and invert the ordering.
+            long value1 = getBalanceValue(ownerStatMap.get(owner1),
+                balanceValue);
+            long value2 = getBalanceValue(ownerStatMap.get(owner2),
+                balanceValue);
+            return (value1 < value2) ? -1 : ((value1 > value2) ? 1 : 0);
+        }
+    }
+
+    /**
+     * Structure to keep track of how much value a {@link WorkerInfo} has
+     * been assigned.
+     */
+    private static class WorkerInfoAssignments implements
+            Comparable<WorkerInfoAssignments> {
+        /** Worker info associated */
+        private final WorkerInfo workerInfo;
+        /** Balance value */
+        private final BalanceValue balanceValue;
+        /** Map of owner to stats */
+        private final Map<PartitionOwner, PartitionStats> ownerStatsMap;
+        /** Current value of this object */
+        private long value = 0;
+
+        public WorkerInfoAssignments(
+                WorkerInfo workerInfo,
+                BalanceValue balanceValue,
+                Map<PartitionOwner, PartitionStats> ownerStatsMap) {
+            this.workerInfo = workerInfo;
+            this.balanceValue = balanceValue;
+            this.ownerStatsMap = ownerStatsMap;
+        }
+
+        /**
+         * Get the total value of all partitions assigned to this worker.
+         *
+         * @return Total value of all partition assignments.
+         */
+        public long getValue() {
+            return value;
+        }
+
+        /**
+         * Assign a {@link PartitionOwner} to this {@link WorkerInfo}.
+         *
+         * @param partitionOwner PartitionOwner to assign.
+         */
+        public void assignPartitionOwner(
+                PartitionOwner partitionOwner) {
+            value += getBalanceValue(ownerStatsMap.get(partitionOwner),
+                                     balanceValue);
+            if (!partitionOwner.getWorkerInfo().equals(workerInfo)) {
+                partitionOwner.setPreviousWorkerInfo(
+                    partitionOwner.getWorkerInfo());
+                partitionOwner.setWorkerInfo(workerInfo);
+            } else {
+                partitionOwner.setPreviousWorkerInfo(null);
+            }
+        }
+
+        @Override
+        public int compareTo(WorkerInfoAssignments other) {
+            // Explicit comparison avoids the int overflow possible when
+            // casting the difference of two longs.
+            return (getValue() < other.getValue()) ? -1 :
+                ((getValue() > other.getValue()) ? 1 : 0);
+        }
+    }
+
+    /**
+     * Balance the partitions with an algorithm based on a value.
+     *
+     * @param conf Configuration to find the algorithm
+     * @param partitionOwners Current partition owners
+     * @param allPartitionStats All the partition stats
+     * @param availableWorkerInfos All the available workers
+     * @return Balanced partition owners
+     */
+    public static Collection<PartitionOwner> balancePartitionsAcrossWorkers(
+        Configuration conf,
+        Collection<PartitionOwner> partitionOwners,
+        Collection<PartitionStats> allPartitionStats,
+        Collection<WorkerInfo> availableWorkerInfos) {
+
+        String balanceAlgorithm =
+            conf.get(PARTITION_BALANCE_ALGORITHM, STATIC_BALANCE_ALGORITHM);
+        if (LOG.isInfoEnabled()) {
+            LOG.info("balancePartitionsAcrossWorkers: Using algorithm " +
+                     balanceAlgorithm);
+        }
+        BalanceValue balanceValue = BalanceValue.UNSET;
+        if (balanceAlgorithm.equals(STATIC_BALANCE_ALGORITHM)) {
+            return partitionOwners;
+        } else if (balanceAlgorithm.equals(EDGE_BALANCE_ALGORITHM)) {
+            balanceValue = BalanceValue.EDGES;
+        } else if (balanceAlgorithm.equals(VERTICES_BALANCE_ALGORITHM)) {
+            balanceValue = BalanceValue.VERTICES;
+        } else {
+            throw new IllegalArgumentException(
+                "balancePartitionsAcrossWorkers: Illegal balance " +
+                "algorithm - " + balanceAlgorithm);
+        }
+
+        // Join the partition stats and partition owners by partition id
+        Map<Integer, PartitionStats> idStatMap =
+            new HashMap<Integer, PartitionStats>();
+        for (PartitionStats partitionStats : allPartitionStats) {
+            if (idStatMap.put(partitionStats.getPartitionId(), partitionStats)
+                    != null) {
+                throw new IllegalStateException(
+                    "balancePartitionsAcrossWorkers: Duplicate partition id " +
+                    "for " + partitionStats);
+            }
+        }
+        Map<PartitionOwner, PartitionStats> ownerStatsMap =
+            new HashMap<PartitionOwner, PartitionStats>();
+        for (PartitionOwner partitionOwner : partitionOwners) {
+            PartitionStats partitionStats =
+                idStatMap.get(partitionOwner.getPartitionId());
+            if (partitionStats == null) {
+                throw new IllegalStateException(
+                    "balancePartitionsAcrossWorkers: Missing partition " +
+                    "stats for " + partitionOwner);
+            }
+            if (ownerStatsMap.put(partitionOwner, partitionStats) != null) {
+                throw new IllegalStateException(
+                    "balancePartitionsAcrossWorkers: Duplicate partition " +
+                    "owner " + partitionOwner);
+            }
+        }
+        if (ownerStatsMap.size() != partitionOwners.size()) {
+            throw new IllegalStateException(
+                "balancePartitionsAcrossWorkers: ownerStats count = " +
+                ownerStatsMap.size() + ", partitionOwners count = " +
+                partitionOwners.size() + " and should match.");
+        }
+
+        List<WorkerInfoAssignments> workerInfoAssignmentsList =
+            new ArrayList<WorkerInfoAssignments>(availableWorkerInfos.size());
+        for (WorkerInfo workerInfo : availableWorkerInfos) {
+            workerInfoAssignmentsList.add(
+                new WorkerInfoAssignments(
+                    workerInfo, balanceValue, ownerStatsMap));
+        }
+
+        // A simple heuristic for balancing the partitions across the workers
+        // using a value (edges, vertices).  An improvement would be to
+        // take into account the already existing partition worker assignments.
+        // 1.  Sort the partitions by size
+        // 2.  Place the workers in a min heap sorted by their total balance
+        //     value.
+        // 3.  From largest partition to the smallest, take the partition
+        //     worker at the top of the heap, add the partition to it, and
+        //     then put it back in the heap
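+        //
+        // Illustrative run (values invented): partitions with balance values
+        // 5, 4, 3, 2 across two empty workers A and B are assigned
+        // 5 -> A, 4 -> B, 3 -> B (total 7), then 2 -> A (total 7).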
+        List<PartitionOwner> partitionOwnerList =
+            new ArrayList<PartitionOwner>(partitionOwners);
+        Collections.sort(partitionOwnerList,
+            Collections.reverseOrder(
+                new PartitionOwnerComparator(ownerStatsMap, balanceValue)));
+        PriorityQueue<WorkerInfoAssignments> minQueue =
+            new PriorityQueue<WorkerInfoAssignments>(workerInfoAssignmentsList);
+        for (PartitionOwner partitionOwner : partitionOwnerList) {
+            WorkerInfoAssignments chosenWorker = minQueue.remove();
+            chosenWorker.assignPartitionOwner(partitionOwner);
+            minQueue.add(chosenWorker);
+        }
+
+        return partitionOwnerList;
+    }
+}
+
diff --git a/src/main/java/org/apache/giraph/graph/partition/PartitionExchange.java b/src/main/java/org/apache/giraph/graph/partition/PartitionExchange.java
new file mode 100644
index 0000000..107e3af
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/PartitionExchange.java
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.giraph.graph.WorkerInfo;
+
+/**
+ * Describes what is required to send and wait for in a potential partition
+ * exchange between workers.
+ */
+public class PartitionExchange {
+    /** Workers that I am dependent on before I can continue */
+    private final Set<WorkerInfo> myDependencyWorkerSet;
+    /** Workers that I need to send partitions to */
+    private final Map<WorkerInfo, List<Integer>> sendWorkerPartitionMap;
+
+    /**
+     * Only constructor.
+     *
+     * @param myDependencyWorkerSet All the workers I must wait for
+     * @param sendWorkerPartitionMap Partitions I need to send to other workers
+     */
+    public PartitionExchange(
+            Set<WorkerInfo> myDependencyWorkerSet,
+            Map<WorkerInfo, List<Integer>> sendWorkerPartitionMap) {
+        this.myDependencyWorkerSet = myDependencyWorkerSet;
+        this.sendWorkerPartitionMap = sendWorkerPartitionMap;
+    }
+
+    /**
+     * Get the workers that I must wait for
+     *
+     * @return Set of workers I must wait for
+     */
+    public Set<WorkerInfo> getMyDependencyWorkerSet() {
+        return myDependencyWorkerSet;
+    }
+
+    /**
+     * Get a mapping of worker to list of partition ids I need to send to.
+     *
+     * @return Mapping of worker to partition id list I will send to.
+     */
+    public Map<WorkerInfo, List<Integer>> getSendWorkerPartitionMap() {
+        return sendWorkerPartitionMap;
+    }
+
+    /**
+     * Is this worker involved in a partition exchange?  Receiving or sending?
+     *
+     * @return True if needs to be involved in the exchange, false otherwise.
+     */
+    public boolean doExchange() {
+        return !myDependencyWorkerSet.isEmpty() ||
+               !sendWorkerPartitionMap.isEmpty();
+    }
+}
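A short usage sketch for PartitionExchange (hedged: workerB and workerC stand in for WorkerInfo instances handed out by the master, and the usual java.util imports are assumed):

    // This worker must wait on workerB and ship partitions 3 and 7
    // to workerC
    Set<WorkerInfo> waitOn = new HashSet<WorkerInfo>();
    waitOn.add(workerB);
    Map<WorkerInfo, List<Integer>> toSend =
        new HashMap<WorkerInfo, List<Integer>>();
    toSend.put(workerC, Arrays.asList(3, 7));

    PartitionExchange exchange = new PartitionExchange(waitOn, toSend);
    if (exchange.doExchange()) {
        // send the listed partitions, then block until every worker in
        // getMyDependencyWorkerSet() has checked in
    }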
diff --git a/src/main/java/org/apache/giraph/graph/partition/PartitionOwner.java b/src/main/java/org/apache/giraph/graph/partition/PartitionOwner.java
new file mode 100644
index 0000000..b7a569a
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/PartitionOwner.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Metadata about ownership of a partition.
+ */
+public interface PartitionOwner extends Writable {
+    /**
+     * Get the partition id that maps to the relevant {@link Partition} object
+     *
+     * @return Partition id
+     */
+    int getPartitionId();
+
+    /**
+     * Get the worker information that is currently responsible for
+     * the partition id.
+     *
+     * @return Owning worker information.
+     */
+    WorkerInfo getWorkerInfo();
+
+    /**
+     * Set the current worker info.
+     *
+     * @param workerInfo Worker info responsible for partition
+     */
+    void setWorkerInfo(WorkerInfo workerInfo);
+
+    /**
+     * Get the worker information that was previously responsible for the
+     * partition id.
+     *
+     * @return Owning worker information or null if no previous worker info.
+     */
+    WorkerInfo getPreviousWorkerInfo();
+
+    /**
+     * Set the previous worker info.
+     *
+     * @param workerInfo Worker info that was previously responsible for the
+     *        partition.
+     */
+    void setPreviousWorkerInfo(WorkerInfo workerInfo);
+
+    /**
+     * If this is a restarted checkpoint, the worker will use this information
+     * to determine where the checkpointed partition was stored on HDFS.
+     *
+     * @return Prefix of the checkpoint HDFS files for this partition, null if
+     *         this is not a restarted superstep.
+     */
+    String getCheckpointFilesPrefix();
+
+    /**
+     * Set the checkpoint files prefix.  Master uses this.
+     *
+     * @param checkpointFilesPrefix HDFS checkpoint file prefix
+     */
+    void setCheckpointFilesPrefix(String checkpointFilesPrefix);
+}
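One plausible way the current/previous worker pair is consumed, as a sketch only (partitionOwners is assumed to be a collection of PartitionOwner implementations, such as the BasicPartitionOwner added elsewhere in this patch):

    for (PartitionOwner owner : partitionOwners) {
        WorkerInfo previous = owner.getPreviousWorkerInfo();
        // A non-null previous worker that differs from the current
        // worker means the partition moved and must be shipped over
        if (previous != null && !previous.equals(owner.getWorkerInfo())) {
            // schedule a transfer: previous -> owner.getWorkerInfo()
        }
    }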
diff --git a/src/main/java/org/apache/giraph/graph/partition/PartitionStats.java b/src/main/java/org/apache/giraph/graph/partition/PartitionStats.java
new file mode 100644
index 0000000..040687c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/PartitionStats.java
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Used to keep track of statistics of every {@link Partition}. Contains no
+ * actual partition data, only the statistics.
+ */
+public class PartitionStats implements Writable {
+    private int partitionId = -1;
+    private long vertexCount = 0;
+    private long finishedVertexCount = 0;
+    private long edgeCount = 0;
+
+    public PartitionStats() {}
+
+    public PartitionStats(int partitionId,
+                          long vertexCount,
+                          long finishedVertexCount,
+                          long edgeCount) {
+        this.partitionId = partitionId;
+        this.vertexCount = vertexCount;
+        this.finishedVertexCount = finishedVertexCount;
+        this.edgeCount = edgeCount;
+    }
+
+    public void setPartitionId(int partitionId) {
+        this.partitionId = partitionId;
+    }
+
+    public int getPartitionId() {
+        return partitionId;
+    }
+
+    public void incrVertexCount() {
+        ++vertexCount;
+    }
+
+    public long getVertexCount() {
+        return vertexCount;
+    }
+
+    public void incrFinishedVertexCount() {
+        ++finishedVertexCount;
+    }
+
+    public long getFinishedVertexCount() {
+        return finishedVertexCount;
+    }
+
+    public void addEdgeCount(long edgeCount) {
+        this.edgeCount += edgeCount;
+    }
+
+    public long getEdgeCount() {
+        return edgeCount;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        partitionId = input.readInt();
+        vertexCount = input.readLong();
+        finishedVertexCount = input.readLong();
+        edgeCount = input.readLong();
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        output.writeInt(partitionId);
+        output.writeLong(vertexCount);
+        output.writeLong(finishedVertexCount);
+        output.writeLong(edgeCount);
+    }
+
+    @Override
+    public String toString() {
+        return "(id=" + partitionId + ",vtx=" + vertexCount + ",finVtx=" +
+               finishedVertexCount + ",edges=" + edgeCount + ")";
+    }
+}
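Since PartitionStats is a plain Hadoop Writable, it round-trips through a byte array; a minimal sketch using only standard java.io classes (inside a method declared to throw IOException):

    PartitionStats stats = new PartitionStats(1, 100, 40, 900);
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    stats.write(new DataOutputStream(buffer));

    PartitionStats copy = new PartitionStats();
    copy.readFields(new DataInputStream(
        new ByteArrayInputStream(buffer.toByteArray())));
    System.out.println(copy);  // (id=1,vtx=100,finVtx=40,edges=900)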
diff --git a/src/main/java/org/apache/giraph/graph/partition/PartitionUtils.java b/src/main/java/org/apache/giraph/graph/partition/PartitionUtils.java
new file mode 100644
index 0000000..cb3fd4d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/PartitionUtils.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.giraph.graph.VertexEdgeCount;
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.log4j.Logger;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+/**
+ * Helper class for {@link Partition} related operations.
+ */
+public class PartitionUtils {
+    /** Class logger */
+    private static Logger LOG = Logger.getLogger(PartitionUtils.class);
+
+    /** Compares workers by their total edge count */
+    private static class EdgeCountComparator implements
+            Comparator<Entry<WorkerInfo, VertexEdgeCount>> {
+
+        @Override
+        public int compare(Entry<WorkerInfo, VertexEdgeCount> worker1,
+                           Entry<WorkerInfo, VertexEdgeCount> worker2) {
+            // Compare the long counts explicitly rather than casting
+            // their difference to an int, which could overflow
+            long diff = worker1.getValue().getEdgeCount() -
+                        worker2.getValue().getEdgeCount();
+            return diff < 0 ? -1 : (diff > 0 ? 1 : 0);
+        }
+    }
+
+    /** Compares workers by their total vertex count */
+    private static class VertexCountComparator implements
+            Comparator<Entry<WorkerInfo, VertexEdgeCount>> {
+
+        @Override
+        public int compare(Entry<WorkerInfo, VertexEdgeCount> worker1,
+                           Entry<WorkerInfo, VertexEdgeCount> worker2) {
+            // Was comparing edge counts; a vertex count comparator
+            // must compare vertex counts
+            long diff = worker1.getValue().getVertexCount() -
+                        worker2.getValue().getVertexCount();
+            return diff < 0 ? -1 : (diff > 0 ? 1 : 0);
+        }
+    }
+
+    /**
+     * Check for imbalances on a per-worker basis by calculating the
+     * mean, minimum and maximum workers by edge and vertex counts.
+     *
+     * @param partitionOwnerList Owners of every partition
+     * @param allPartitionStats Stats of all the partitions
+     */
+    public static void analyzePartitionStats(
+            Collection<PartitionOwner> partitionOwnerList,
+            List<PartitionStats> allPartitionStats) {
+        Map<Integer, PartitionOwner> idOwnerMap =
+            new HashMap<Integer, PartitionOwner>();
+        for (PartitionOwner partitionOwner : partitionOwnerList) {
+            if (idOwnerMap.put(partitionOwner.getPartitionId(),
+                               partitionOwner) != null) {
+                throw new IllegalStateException(
+                    "analyzePartitionStats: Duplicate partition " +
+                    partitionOwner);
+            }
+        }
+
+        Map<WorkerInfo, VertexEdgeCount> workerStatsMap = Maps.newHashMap();
+        VertexEdgeCount totalVertexEdgeCount = new VertexEdgeCount();
+        for (PartitionStats partitionStats : allPartitionStats) {
+            WorkerInfo workerInfo =
+                idOwnerMap.get(partitionStats.getPartitionId()).getWorkerInfo();
+            VertexEdgeCount vertexEdgeCount =
+                workerStatsMap.get(workerInfo);
+            if (vertexEdgeCount == null) {
+                workerStatsMap.put(
+                    workerInfo,
+                    new VertexEdgeCount(partitionStats.getVertexCount(),
+                                        partitionStats.getEdgeCount()));
+            } else {
+                workerStatsMap.put(
+                    workerInfo,
+                    vertexEdgeCount.incrVertexEdgeCount(
+                        partitionStats.getVertexCount(),
+                        partitionStats.getEdgeCount()));
+            }
+            totalVertexEdgeCount =
+                totalVertexEdgeCount.incrVertexEdgeCount(
+                    partitionStats.getVertexCount(),
+                    partitionStats.getEdgeCount());
+        }
+
+        List<Entry<WorkerInfo, VertexEdgeCount>> workerEntryList =
+            Lists.newArrayList(workerStatsMap.entrySet());
+
+        if (LOG.isInfoEnabled()) {
+            Collections.sort(workerEntryList, new VertexCountComparator());
+            LOG.info("analyzePartitionStats: Vertices - Mean: " +
+                    (totalVertexEdgeCount.getVertexCount() /
+                        workerStatsMap.size()) +
+                    ", Min: " +
+                    workerEntryList.get(0).getKey() + " - " +
+                    workerEntryList.get(0).getValue().getVertexCount() +
+                    ", Max: "+
+                    workerEntryList.get(workerEntryList.size() - 1).getKey() +
+                    " - " +
+                    workerEntryList.get(workerEntryList.size() - 1).
+                    getValue().getVertexCount());
+            Collections.sort(workerEntryList, new EdgeCountComparator());
+            LOG.info("analyzePartitionStats: Edges - Mean: " +
+                     (totalVertexEdgeCount.getEdgeCount() /
+                         workerStatsMap.size()) +
+                     ", Min: " +
+                     workerEntryList.get(0).getKey() + " - " +
+                     workerEntryList.get(0).getValue().getEdgeCount() +
+                     ", Max: "+
+                     workerEntryList.get(workerEntryList.size() - 1).getKey() +
+                     " - " +
+                     workerEntryList.get(workerEntryList.size() - 1).
+                     getValue().getEdgeCount());
+        }
+    }
+}
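A hedged usage sketch for analyzePartitionStats, with two partitions on one worker (workerInfo is assumed to come from the running job; the four-argument BasicPartitionOwner constructor is the one used by SuperstepHashPartitionerFactory later in this patch):

    List<PartitionOwner> owners = new ArrayList<PartitionOwner>();
    owners.add(new BasicPartitionOwner(0, workerInfo, null, null));
    owners.add(new BasicPartitionOwner(1, workerInfo, null, null));

    List<PartitionStats> stats = new ArrayList<PartitionStats>();
    stats.add(new PartitionStats(0, 50, 10, 400));
    stats.add(new PartitionStats(1, 70, 20, 600));

    // Logs the per-worker vertex and edge mean/min/max at INFO level
    PartitionUtils.analyzePartitionStats(owners, stats);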
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangeMasterPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/RangeMasterPartitioner.java
new file mode 100644
index 0000000..bf7e5a6
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangeMasterPartitioner.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Some functionality is provided, but this class is meant for developers
+ * to determine the partitioning based on the actual types of data.  The
+ * implementations of several methods are left to the developer, who is
+ * trying to control the number of messages sent from one worker to
+ * another.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class RangeMasterPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> implements
+        MasterGraphPartitioner<I, V, E, M> {
+
+    @Override
+    public PartitionStats createPartitionStats() {
+        return new RangePartitionStats<I>();
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangePartitionOwner.java b/src/main/java/org/apache/giraph/graph/partition/RangePartitionOwner.java
new file mode 100644
index 0000000..3b6f7f9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangePartitionOwner.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Adds the maximum key index to the {@link PartitionOwner}.  Can also
+ * provide a split hint if desired.
+ *
+ * @param <I> Vertex index type
+ */
+@SuppressWarnings("rawtypes")
+public class RangePartitionOwner<I extends WritableComparable>
+        extends BasicPartitionOwner {
+    /** Max index for this partition */
+    private I maxIndex;
+
+    public RangePartitionOwner() {
+    }
+
+    public RangePartitionOwner(I maxIndex) {
+        this.maxIndex = maxIndex;
+    }
+
+    public I getMaxIndex() {
+        return maxIndex;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        super.readFields(input);
+        maxIndex = BspUtils.<I>createVertexIndex(getConf());
+        maxIndex.readFields(input);
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        super.write(output);
+        maxIndex.write(output);
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangePartitionStats.java b/src/main/java/org/apache/giraph/graph/partition/RangePartitionStats.java
new file mode 100644
index 0000000..2da2a4d
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangePartitionStats.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Same as {@link PartitionStats}, but also includes the hint for range-based
+ * partitioning.
+ *
+ * @param <I> Vertex index type
+ */
+@SuppressWarnings("rawtypes")
+public class RangePartitionStats<I extends WritableComparable>
+        extends PartitionStats {
+    /** Can be null if no hint, otherwise a splitting hint */
+    private RangeSplitHint<I> hint;
+
+    /**
+     * Get the range split hint (if any)
+     *
+     * @return Hint of how to split the range if desired, null otherwise
+     */
+    public RangeSplitHint<I> getRangeSplitHint() {
+        return hint;
+    }
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        super.readFields(input);
+        boolean hintExists = input.readBoolean();
+        if (hintExists) {
+            hint = new RangeSplitHint<I>();
+            hint.readFields(input);
+        } else {
+            hint = null;
+        }
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        super.write(output);
+        output.writeBoolean(hint != null);
+        if (hint != null) {
+            hint.write(output);
+        }
+    }
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangePartitionerFactory.java b/src/main/java/org/apache/giraph/graph/partition/RangePartitionerFactory.java
new file mode 100644
index 0000000..5855c0e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangePartitionerFactory.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Range partitioning will split the vertices by a key range based on a
+ * generic type.  This allows vertices whose ids have some locality to
+ * reduce the number of messages sent.  The tradeoff is that range
+ * partitioning is more susceptible to hot spots if the keys are not
+ * randomly distributed.  Another drawback is that the user must implement
+ * some of the functionality around how to split the key range.
+ *
+ * See {@link RangeWorkerPartitioner}
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class RangePartitionerFactory<I extends WritableComparable,
+    V extends Writable, E extends Writable, M extends Writable>
+    implements GraphPartitionerFactory<I, V, E, M> {
+}
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangeSplitHint.java b/src/main/java/org/apache/giraph/graph/partition/RangeSplitHint.java
new file mode 100644
index 0000000..c275332
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangeSplitHint.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.giraph.graph.BspUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Hint to the {@link RangeMasterPartitioner} about how a
+ * {@link RangePartitionOwner} can be split.
+ *
+ * @param <I> Vertex index to split around
+ */
+@SuppressWarnings("rawtypes")
+public class RangeSplitHint<I extends WritableComparable>
+        implements Writable, Configurable {
+    /** Hinted split index */
+    private I splitIndex;
+    /** Number of vertices in this range before the split */
+    private long preSplitVertexCount;
+    /** Number of vertices in this range after the split */
+    private long postSplitVertexCount;
+    /** Configuration */
+    private Configuration conf;
+
+    @Override
+    public void readFields(DataInput input) throws IOException {
+        splitIndex = BspUtils.<I>createVertexIndex(conf);
+        splitIndex.readFields(input);
+        preSplitVertexCount = input.readLong();
+        postSplitVertexCount = input.readLong();
+    }
+
+    @Override
+    public void write(DataOutput output) throws IOException {
+        splitIndex.write(output);
+        output.writeLong(preSplitVertexCount);
+        output.writeLong(postSplitVertexCount);
+    }
+
+    @Override
+    public Configuration getConf() {
+        return conf;
+    }
+
+    @Override
+    public void setConf(Configuration conf) {
+        this.conf = conf;
+    }
+}
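Because the split index is instantiated through BspUtils.createVertexIndex(conf), a RangeSplitHint must have its Configuration injected before it can be deserialized; a short sketch (conf and input are assumed to be in scope):

    RangeSplitHint<LongWritable> hint = new RangeSplitHint<LongWritable>();
    // setConf() must run before readFields(), otherwise
    // BspUtils.createVertexIndex(conf) has nothing to work with
    hint.setConf(conf);
    hint.readFields(input);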
diff --git a/src/main/java/org/apache/giraph/graph/partition/RangeWorkerPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/RangeWorkerPartitioner.java
new file mode 100644
index 0000000..6e3f3e0
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/RangeWorkerPartitioner.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.Collection;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Range partitioning will split the vertices by a key range based on a
+ * generic type.  This allows vertices whose ids have some locality to
+ * reduce the number of messages sent.  The tradeoff is that range
+ * partitioning is more susceptible to hot spots if the keys are not
+ * randomly distributed.  Another drawback is that the user must implement
+ * some of the functionality around how to split the key range.
+ *
+ * Note:  This implementation is incomplete, the developer must implement the
+ * various methods based on their index type.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class RangeWorkerPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> implements
+        WorkerGraphPartitioner<I, V, E, M> {
+    /** Mapping of the vertex ids to the {@link PartitionOwner} */
+    protected NavigableMap<I, RangePartitionOwner<I>> vertexRangeMap =
+        new TreeMap<I, RangePartitionOwner<I>>();
+
+    @Override
+    public PartitionOwner createPartitionOwner() {
+        return new RangePartitionOwner<I>();
+    }
+
+    @Override
+    public PartitionOwner getPartitionOwner(I vertexId) {
+        // Find the partition owner whose maximum vertex index covers
+        // this vertex id.  If the vertex id exceeds every maximum
+        // index, give it to the last partition
+        if (vertexId == null) {
+            throw new IllegalArgumentException(
+                "getPartitionOwner: Illegal null vertex id");
+        }
+        I maxVertexIndex = vertexRangeMap.ceilingKey(vertexId);
+        if (maxVertexIndex == null) {
+            return vertexRangeMap.lastEntry().getValue();
+        } else {
+            // Look up by the ceiling key; the vertex id itself is
+            // usually not a key in the map
+            return vertexRangeMap.get(maxVertexIndex);
+        }
+    }
+
+    @Override
+    public Collection<? extends PartitionOwner> getPartitionOwners() {
+        return vertexRangeMap.values();
+    }
+}
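The ceilingKey() lookup above can be illustrated with a standalone TreeMap example, where each key is the maximum id its partition covers:

    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class RangeLookupSketch {
        public static void main(String[] args) {
            NavigableMap<Integer, String> ranges =
                new TreeMap<Integer, String>();
            ranges.put(100, "partition-A");   // ids up to 100
            ranges.put(200, "partition-B");   // ids 101 to 200

            int vertexId = 150;
            Integer maxIndex = ranges.ceilingKey(vertexId);
            // ceilingKey(150) == 200.  Note that ranges.get(vertexId)
            // would return null here, which is why the owner lookup
            // must go through the ceiling key
            String owner = (maxIndex == null)
                ? ranges.lastEntry().getValue()  // id beyond every range
                : ranges.get(maxIndex);
            System.out.println(vertexId + " -> " + owner);  // partition-B
        }
    }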
diff --git a/src/main/java/org/apache/giraph/graph/partition/WorkerGraphPartitioner.java b/src/main/java/org/apache/giraph/graph/partition/WorkerGraphPartitioner.java
new file mode 100644
index 0000000..ce10680
--- /dev/null
+++ b/src/main/java/org/apache/giraph/graph/partition/WorkerGraphPartitioner.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.graph.partition;
+
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Stores the {@link PartitionOwner} objects from the master and provides the
+ * mapping of vertex to {@link PartitionOwner}. Also generates the partition
+ * owner implementation.
+ */
+@SuppressWarnings("rawtypes")
+public interface WorkerGraphPartitioner<I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable> {
+    /**
+     * Instantiate the {@link PartitionOwner} implementation used to read the
+     * master assignments.
+     *
+     * @return Instantiated {@link PartitionOwner} object
+     */
+    PartitionOwner createPartitionOwner();
+
+    /**
+     * Figure out the owner of a vertex
+     *
+     * @param vertexId Vertex id to get the partition for
+     * @return Correct partition owner
+     */
+    PartitionOwner getPartitionOwner(I vertexId);
+
+    /**
+     * At the end of a superstep, workers have {@link PartitionStats} generated
+     * for each of their partitions.  This method allows the user to
+     * modify or replace the {@link PartitionStats} implementations that
+     * are sent to the master.
+     *
+     * @param workerPartitionStats Stats generated by the infrastructure during
+     *        the superstep
+     * @param partitionMap Map of all the partitions owned by this worker
+     *        (could be used to provide more useful stat information)
+     * @return Final partition stats
+     */
+    Collection<PartitionStats> finalizePartitionStats(
+            Collection<PartitionStats> workerPartitionStats,
+            Map<Integer, Partition<I, V, E, M>> partitionMap);
+
+    /**
+     * Get the partitions owners and update locally.  Returns the partitions
+     * to send to other workers and other dependencies.
+     *
+     * @param myWorkerInfo Worker info.
+     * @param masterSetPartitionOwners Master set partition owners, received
+     *        prior to beginning the superstep
+     * @param partitionMap Map of all the partitions owned by this worker
+     *        (can be used to fill the return map of partitions to send)
+     * @return Information for the partition exchange.
+     */
+    PartitionExchange updatePartitionOwners(
+            WorkerInfo myWorkerInfo,
+            Collection<? extends PartitionOwner> masterSetPartitionOwners,
+            Map<Integer, Partition<I, V, E, M>> partitionMap);
+
+    /**
+     * Get a collection of the {@link PartitionOwner} objects.
+     *
+     * @return Collection of owners for every partition.
+     */
+    Collection<? extends PartitionOwner> getPartitionOwners();
+}
diff --git a/src/main/java/org/apache/giraph/hadoop/BspPolicyProvider.java b/src/main/java/org/apache/giraph/hadoop/BspPolicyProvider.java
new file mode 100644
index 0000000..d5d6398
--- /dev/null
+++ b/src/main/java/org/apache/giraph/hadoop/BspPolicyProvider.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.hadoop;
+
+import org.apache.giraph.comm.CommunicationsInterface;
+import org.apache.hadoop.security.authorize.PolicyProvider;
+import org.apache.hadoop.security.authorize.Service;
+
+/**
+ * {@link PolicyProvider} for the Giraph BSP communication protocols.
+ */
+public class BspPolicyProvider extends PolicyProvider {
+    private static final Service[] bspCommunicationsServices =
+        new Service[] {
+            new Service("security.bsp.communications.protocol.acl",
+                        CommunicationsInterface.class),
+    };
+
+    @Override
+    public Service[] getServices() {
+        return bspCommunicationsServices;
+    }
+}
+
diff --git a/src/main/java/org/apache/giraph/hadoop/BspTokenSelector.java b/src/main/java/org/apache/giraph/hadoop/BspTokenSelector.java
new file mode 100644
index 0000000..458c0b9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/hadoop/BspTokenSelector.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.hadoop;
+
+import java.util.Collection;
+
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenIdentifier;
+import org.apache.hadoop.security.token.TokenSelector;
+
+/**
+ * Look through the supplied tokens and return the first one whose kind
+ * matches the job-token kind, or null if none does.
+ */
+public class BspTokenSelector implements TokenSelector<JobTokenIdentifier> {
+    /** Kind of token that this selector looks for */
+    private static final Text KIND_NAME = new Text("mapreduce.job");
+
+    @SuppressWarnings("unchecked")
+    @Override
+    public Token<JobTokenIdentifier> selectToken(Text service,
+            Collection<Token<? extends TokenIdentifier>> tokens) {
+        if (service == null) {
+            return null;
+        }
+        for (Token<? extends TokenIdentifier> token : tokens) {
+            if (KIND_NAME.equals(token.getKind())) {
+                return (Token<JobTokenIdentifier>) token;
+            }
+        }
+        return null;
+    }
+}
+
diff --git a/src/main/java/org/apache/giraph/integration/SuperstepHashPartitionerFactory.java b/src/main/java/org/apache/giraph/integration/SuperstepHashPartitionerFactory.java
new file mode 100644
index 0000000..02b6157
--- /dev/null
+++ b/src/main/java/org/apache/giraph/integration/SuperstepHashPartitionerFactory.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.integration;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.giraph.graph.WorkerInfo;
+import org.apache.giraph.graph.partition.BasicPartitionOwner;
+import org.apache.giraph.graph.partition.HashMasterPartitioner;
+import org.apache.giraph.graph.partition.HashPartitionerFactory;
+import org.apache.giraph.graph.partition.MasterGraphPartitioner;
+import org.apache.giraph.graph.partition.PartitionOwner;
+import org.apache.giraph.graph.partition.PartitionStats;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.log4j.Logger;
+
+/**
+ * Example graph partitioner that builds on {@link HashMasterPartitioner} to
+ * send the partitions to the worker that matches the superstep.  It is for
+ * testing only and should never be used in practice.
+ */
+@SuppressWarnings("rawtypes")
+public class SuperstepHashPartitionerFactory<
+        I extends WritableComparable,
+        V extends Writable, E extends Writable, M extends Writable>
+        extends HashPartitionerFactory<I, V, E, M> {
+
+    /**
+     * Changes the {@link HashMasterPartitioner} to make ownership of the
+     * partitions based on a superstep.  For testing only as it is totally
+     * unbalanced.
+     *
+     * @param <I> vertex id
+     * @param <V> vertex data
+     * @param <E> edge data
+     * @param <M> message data
+     */
+    private static class SuperstepMasterPartition<
+            I extends WritableComparable,
+            V extends Writable, E extends Writable, M extends Writable>
+            extends HashMasterPartitioner<I, V, E, M> {
+        /** Class logger */
+        private static Logger LOG =
+            Logger.getLogger(SuperstepMasterPartition.class);
+
+        public SuperstepMasterPartition(Configuration conf) {
+            super(conf);
+        }
+
+        @Override
+        public Collection<PartitionOwner> generateChangedPartitionOwners(
+                Collection<PartitionStats> allPartitionStatsList,
+                Collection<WorkerInfo> availableWorkerInfos,
+                int maxWorkers,
+                long superstep) {
+            // Assign all the partitions to
+            // superstep mod availableWorkerInfos
+            // Guaranteed to be different if the workers (and their order)
+            // do not change
+            long workerIndex = superstep % availableWorkerInfos.size();
+            int i = 0;
+            WorkerInfo chosenWorkerInfo = null;
+            for (WorkerInfo workerInfo : availableWorkerInfos) {
+                if (workerIndex == i) {
+                    chosenWorkerInfo = workerInfo;
+                    break;
+                }
+                ++i;
+            }
+            if (LOG.isInfoEnabled()) {
+                LOG.info("generateChangedPartitionOwners: Chosen worker " +
+                         "for superstep " + superstep + " is " +
+                         chosenWorkerInfo);
+            }
+
+            List<PartitionOwner> partitionOwnerList =
+                new ArrayList<PartitionOwner>();
+            for (PartitionOwner partitionOwner :
+                    getCurrentPartitionOwners()) {
+                WorkerInfo prevWorkerInfo =
+                    partitionOwner.getWorkerInfo().equals(chosenWorkerInfo) ?
+                        null : partitionOwner.getWorkerInfo();
+                PartitionOwner tmpPartitionOwner =
+                    new BasicPartitionOwner(partitionOwner.getPartitionId(),
+                                            chosenWorkerInfo,
+                                            prevWorkerInfo,
+                                            null);
+                partitionOwnerList.add(tmpPartitionOwner);
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("partition owner was " + partitionOwner +
+                             ", new " + tmpPartitionOwner);
+                }
+            }
+            setPartitionOwnerList(partitionOwnerList);
+            return partitionOwnerList;
+        }
+    }
+
+    @Override
+    public MasterGraphPartitioner<I, V, E, M>
+            createMasterGraphPartitioner() {
+        return new SuperstepMasterPartition<I, V, E, M>(getConf());
+    }
+}
diff --git a/src/main/java/org/apache/giraph/lib/AdjacencyListTextVertexOutputFormat.java b/src/main/java/org/apache/giraph/lib/AdjacencyListTextVertexOutputFormat.java
new file mode 100644
index 0000000..d966dae
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/AdjacencyListTextVertexOutputFormat.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * OutputFormat to write out the graph vertices as delimiter-separated
+ * text (tab-delimited by default).  With the default delimiter, a vertex
+ * is written out as:
+ *
+ * <VertexId><tab><Vertex Value><tab>[<EdgeId><tab><EdgeValue>]+
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public class AdjacencyListTextVertexOutputFormat<I extends WritableComparable,
+    V extends Writable, E extends Writable>
+    extends TextVertexOutputFormat<I, V, E> {
+
+  static class AdjacencyListVertexWriter<I extends WritableComparable, V extends
+      Writable, E extends Writable> extends TextVertexWriter<I, V, E> {
+    public static final String LINE_TOKENIZE_VALUE = "output.delimiter";
+    public static final String LINE_TOKENIZE_VALUE_DEFAULT = "\t";
+
+    private String delimiter;
+
+    public AdjacencyListVertexWriter(RecordWriter<Text,Text> recordWriter) {
+      super(recordWriter);
+    }
+
+    @Override
+    public void writeVertex(BasicVertex<I, V, E, ?> vertex) throws IOException,
+        InterruptedException {
+      if (delimiter == null) {
+        delimiter = getContext().getConfiguration()
+           .get(LINE_TOKENIZE_VALUE, LINE_TOKENIZE_VALUE_DEFAULT);
+      }
+
+      StringBuilder sb = new StringBuilder(vertex.getVertexId().toString());
+      sb.append(delimiter);
+      sb.append(vertex.getVertexValue().toString());
+
+      for (I edge : vertex) {
+        sb.append(delimiter).append(edge);
+        sb.append(delimiter).append(vertex.getEdgeValue(edge));
+      }
+
+      getRecordWriter().write(new Text(sb.toString()), null);
+    }
+  }
+
+  @Override
+  public VertexWriter<I, V, E> createVertexWriter(TaskAttemptContext context)
+      throws IOException, InterruptedException {
+    return new AdjacencyListVertexWriter<I, V, E>
+        (textOutputFormat.getRecordWriter(context));
+  }
+
+}
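A hedged configuration sketch: overriding the delimiter before submitting the job (job stands in for the Hadoop Job being set up):

    Configuration conf = job.getConfiguration();
    // LINE_TOKENIZE_VALUE ("output.delimiter") defaults to a tab
    conf.set("output.delimiter", ",");
    // A vertex 42 with value 0.5 and edges (12, 0.1) and (13, 0.2)
    // would then be written as:  42,0.5,12,0.1,13,0.2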
diff --git a/src/main/java/org/apache/giraph/lib/AdjacencyListVertexReader.java b/src/main/java/org/apache/giraph/lib/AdjacencyListVertexReader.java
new file mode 100644
index 0000000..b9ccd6c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/AdjacencyListVertexReader.java
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import com.google.common.collect.Maps;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.Edge;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordReader;
+
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * VertexReader that reads lines of text with vertices encoded as adjacency
+ * lists and converts each token to the correct type.  For example, a graph
+ * with vertices as integers and values as doubles could be encoded as:
+ *   1 0.1 2 0.2 3 0.3
+ * to represent a vertex named 1, with 0.1 as its value and two edges, to
+ * vertices 2 and 3, with edge values of 0.2 and 0.3, respectively.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class AdjacencyListVertexReader<I extends WritableComparable,
+    V extends Writable, E extends Writable, M extends Writable> extends
+    TextVertexInputFormat.TextVertexReader<I, V, E, M> {
+
+  public static final String LINE_TOKENIZE_VALUE = "adj.list.input.delimiter";
+  public static final String LINE_TOKENIZE_VALUE_DEFAULT = "\t";
+
+  private String splitValue = null;
+
+  /**
+   * Utility for doing any cleaning of each line before it is tokenized.
+   */
+  public interface LineSanitizer {
+    /**
+     * Clean string s before attempting to tokenize it.
+     */
+    public String sanitize(String s);
+  }
+
+  private LineSanitizer sanitizer = null;
+
+  public AdjacencyListVertexReader(RecordReader<LongWritable, Text> lineRecordReader) {
+    super(lineRecordReader);
+  }
+
+  public AdjacencyListVertexReader(RecordReader<LongWritable, Text> lineRecordReader,
+      LineSanitizer sanitizer) {
+    super(lineRecordReader);
+    this.sanitizer = sanitizer;
+  }
+
+  /**
+   * Store the Id for this line in an instance of its correct type.
+   * @param s Id of vertex from line
+   * @param id Instance of Id's type, in which to store its value
+   */
+  public abstract void decodeId(String s, I id);
+
+  /**
+   * Store the value for this line in an instance of its correct type.
+   * @param s Value from line
+   * @param value Instance of value's type, in which to store its value
+   */
+  public abstract void decodeValue(String s, V value);
+
+  /**
+   * Store an edge from the line into an instance of a correctly typed Edge
+   * @param id The edge's id from the line
+   * @param value The edge's value from the line
+   * @param edge Instance of edge in which to store the id and value
+   */
+  public abstract void decodeEdge(String id, String value, Edge<I, E> edge);
+
+
+  @Override
+  public boolean nextVertex() throws IOException, InterruptedException {
+    return getRecordReader().nextKeyValue();
+  }
+
+  @Override
+  public BasicVertex<I, V, E, M> getCurrentVertex() throws IOException, InterruptedException {
+    Configuration conf = getContext().getConfiguration();
+    String line = getRecordReader().getCurrentValue().toString();
+    BasicVertex<I, V, E, M> vertex = BspUtils.createVertex(conf);
+
+    if (sanitizer != null) {
+      line = sanitizer.sanitize(line);
+    }
+
+    if (splitValue == null) {
+      splitValue = conf.get(LINE_TOKENIZE_VALUE, LINE_TOKENIZE_VALUE_DEFAULT);
+    }
+
+    String [] values = line.split(splitValue);
+
+    if ((values.length < 2) || (values.length % 2 != 0)) {
+      throw new IllegalArgumentException("Line did not split correctly: " + line);
+    }
+
+    I vertexId = BspUtils.<I>createVertexIndex(conf);
+    decodeId(values[0], vertexId);
+
+    V value = BspUtils.<V>createVertexValue(conf);
+    decodeValue(values[1], value);
+
+    int i = 2;
+    Map<I, E> edges = Maps.newHashMap();
+    while (i < values.length) {
+      // Create a fresh Edge per iteration so that decodeEdge()
+      // implementations that mutate in place cannot alias the
+      // previously stored ids and values
+      Edge<I, E> edge = new Edge<I, E>();
+      decodeEdge(values[i], values[i + 1], edge);
+      edges.put(edge.getDestVertexId(), edge.getEdgeValue());
+      i += 2;
+    }
+    vertex.initialize(vertexId, value, edges, null);
+    return vertex;
+  }
+}
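A hypothetical concrete reader for long ids with double vertex and edge values, sketching the three decode callbacks (the Edge setter names here are assumptions; adjust them to the actual Edge API):

    public class LongDoubleDoubleAdjacencyListVertexReader extends
        AdjacencyListVertexReader<LongWritable, DoubleWritable,
            DoubleWritable, DoubleWritable> {

      public LongDoubleDoubleAdjacencyListVertexReader(
          RecordReader<LongWritable, Text> lineRecordReader) {
        super(lineRecordReader);
      }

      @Override
      public void decodeId(String s, LongWritable id) {
        id.set(Long.parseLong(s));
      }

      @Override
      public void decodeValue(String s, DoubleWritable value) {
        value.set(Double.parseDouble(s));
      }

      @Override
      public void decodeEdge(String id, String value,
          Edge<LongWritable, DoubleWritable> edge) {
        // Assumed setters; the reader only requires that the decoded
        // id and value end up stored inside the passed-in Edge
        edge.setDestVertexId(new LongWritable(Long.parseLong(id)));
        edge.setEdgeValue(new DoubleWritable(Double.parseDouble(value)));
      }
    }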
diff --git a/src/main/java/org/apache/giraph/lib/IdWithValueTextOutputFormat.java b/src/main/java/org/apache/giraph/lib/IdWithValueTextOutputFormat.java
new file mode 100644
index 0000000..9d2e088
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/IdWithValueTextOutputFormat.java
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * Write out vertices' ids and values, but not their edges or edge values.
+ * This is a useful output format when the final value of the vertex is
+ * all that's needed. The boolean configuration parameter reverse.id.and.value
+ * allows reversing the output of id and value.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public class IdWithValueTextOutputFormat<I extends WritableComparable,
+    V extends Writable, E extends Writable>
+    extends TextVertexOutputFormat<I, V, E> {
+
+  static class IdWithValueVertexWriter<I extends WritableComparable, V extends
+      Writable, E extends Writable> extends TextVertexWriter<I, V, E> {
+
+    public static final String LINE_TOKENIZE_VALUE = "output.delimiter";
+    public static final String LINE_TOKENIZE_VALUE_DEFAULT = "\t";
+
+    public static final String REVERSE_ID_AND_VALUE = "reverse.id.and.value";
+    public static final boolean REVERSE_ID_AND_VALUE_DEFAULT = false;
+
+    private String delimiter;
+
+    public IdWithValueVertexWriter(RecordWriter<Text, Text> recordWriter) {
+      super(recordWriter);
+    }
+
+    @Override
+    public void writeVertex(BasicVertex<I, V, E, ?> vertex) throws IOException,
+        InterruptedException {
+      if (delimiter == null) {
+        delimiter = getContext().getConfiguration()
+           .get(LINE_TOKENIZE_VALUE, LINE_TOKENIZE_VALUE_DEFAULT);
+      }
+
+      String first;
+      String second;
+      boolean reverseOutput = getContext().getConfiguration()
+          .getBoolean(REVERSE_ID_AND_VALUE, REVERSE_ID_AND_VALUE_DEFAULT);
+
+      if (reverseOutput) {
+        first = vertex.getVertexValue().toString();
+        second = vertex.getVertexId().toString();
+      } else {
+        first = vertex.getVertexId().toString();
+        second = vertex.getVertexValue().toString();
+      }
+
+      Text line = new Text(first + delimiter + second);
+
+      getRecordWriter().write(line, null);
+    }
+  }
+
+  @Override
+  public VertexWriter<I, V, E> createVertexWriter(TaskAttemptContext context)
+      throws IOException, InterruptedException {
+    return new IdWithValueVertexWriter<I, V, E>
+        (textOutputFormat.getRecordWriter(context));
+  }
+
+}
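A one-line sketch of flipping the output order through the reverse.id.and.value parameter (job assumed to be the Hadoop Job being configured):

    // Emit "<value><tab><id>" instead of the default "<id><tab><value>"
    job.getConfiguration().setBoolean("reverse.id.and.value", true);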
diff --git a/src/main/java/org/apache/giraph/lib/JsonBase64VertexFormat.java b/src/main/java/org/apache/giraph/lib/JsonBase64VertexFormat.java
new file mode 100644
index 0000000..c4164d3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/JsonBase64VertexFormat.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+/**
+ * Keeps the vertex keys for the input/output vertex format
+ */
+public interface JsonBase64VertexFormat {
+    /** Vertex id key */
+    public static final String VERTEX_ID_KEY = "vertexId";
+    /** Vertex value key*/
+    public static final String VERTEX_VALUE_KEY = "vertexValue";
+    /** Edge value array key (all the edges are stored here) */
+    public static final String EDGE_ARRAY_KEY = "edgeArray";
+}
diff --git a/src/main/java/org/apache/giraph/lib/JsonBase64VertexInputFormat.java b/src/main/java/org/apache/giraph/lib/JsonBase64VertexInputFormat.java
new file mode 100644
index 0000000..5111a87
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/JsonBase64VertexInputFormat.java
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+import com.google.common.collect.Maps;
+import net.iharder.Base64;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInput;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Simple way to represent the structure of the graph with a JSON object.
+ * The actual vertex ids, values, and edges are stored as
+ * Base64-encoded serialized Writable bytes.
+ * Works with {@link JsonBase64VertexOutputFormat}
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public class JsonBase64VertexInputFormat<
+        I extends WritableComparable, V extends Writable, E extends Writable,
+        M extends Writable>
+        extends TextVertexInputFormat<I, V, E, M> implements
+        JsonBase64VertexFormat {
+    /**
+     * Simple reader that supports {@link JsonBase64VertexInputFormat}
+     *
+     * @param <I> Vertex index value
+     * @param <V> Vertex value
+     * @param <E> Edge value
+     * @param <M> Message value
+     */
+    private static class JsonBase64VertexReader<
+            I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable> extends TextVertexReader<I, V, E, M> {
+        /**
+         * Only constructor.  Requires the LineRecordReader
+         *
+         * @param lineRecordReader Line record reader to read from
+         */
+        public JsonBase64VertexReader(
+                RecordReader<LongWritable, Text> lineRecordReader) {
+            super(lineRecordReader);
+        }
+
+        @Override
+        public boolean nextVertex() throws IOException, InterruptedException {
+            return getRecordReader().nextKeyValue();
+        }
+
+        @Override
+        public BasicVertex<I, V, E, M> getCurrentVertex()
+                throws IOException, InterruptedException {
+            Configuration conf = getContext().getConfiguration();
+            BasicVertex<I, V, E, M> vertex = BspUtils.createVertex(conf);
+
+            Text line = getRecordReader().getCurrentValue();
+            JSONObject vertexObject;
+            try {
+                vertexObject = new JSONObject(line.toString());
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "next: Failed to get the vertex", e);
+            }
+            DataInput input = null;
+            byte[] decodedWritable = null;
+            I vertexId = null;
+            try {
+                decodedWritable = Base64.decode(
+                    vertexObject.getString(VERTEX_ID_KEY));
+                input = new DataInputStream(
+                    new ByteArrayInputStream(decodedWritable));
+                vertexId = BspUtils.<I>createVertexIndex(conf);
+                vertexId.readFields(input);
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "next: Failed to get vertex id", e);
+            }
+            V vertexValue = null;
+            try {
+                decodedWritable = Base64.decode(
+                    vertexObject.getString(VERTEX_VALUE_KEY));
+                input = new DataInputStream(
+                    new ByteArrayInputStream(decodedWritable));
+                vertexValue = BspUtils.<V>createVertexValue(conf);
+                vertexValue.readFields(input);
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "next: Failed to get vertex value", e);
+            }
+            JSONArray edgeArray = null;
+            try {
+                edgeArray = vertexObject.getJSONArray(EDGE_ARRAY_KEY);
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "next: Failed to get edge array", e);
+            }
+            Map<I, E> edgeMap = Maps.newHashMap();
+            for (int i = 0; i < edgeArray.length(); ++i) {
+                try {
+                    decodedWritable =
+                        Base64.decode(edgeArray.getString(i));
+                } catch (JSONException e) {
+                    throw new IllegalArgumentException(
+                        "next: Failed to get edge value", e);
+                }
+                input = new DataInputStream(
+                    new ByteArrayInputStream(decodedWritable));
+                Edge<I, E> edge = new Edge<I, E>();
+                edge.setConf(getContext().getConfiguration());
+                edge.readFields(input);
+                edgeMap.put(edge.getDestVertexId(), edge.getEdgeValue());
+            }
+            vertex.initialize(vertexId, vertexValue, edgeMap, null);
+            return vertex;
+        }
+    }
+
+    @Override
+    public VertexReader<I, V, E, M> createVertexReader(
+            InputSplit split,
+            TaskAttemptContext context) throws IOException {
+        return new JsonBase64VertexReader<I, V, E, M>(
+            textInputFormat.createRecordReader(split, context));
+    }
+}
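
Every field above follows the same two-step pattern: Base64-decode the JSON
string, then readFields() from the resulting bytes. A self-contained sketch of
that round trip with the same net.iharder.Base64 helper; the LongWritable is
just an example payload:

    import net.iharder.Base64;
    import org.apache.hadoop.io.LongWritable;
    import java.io.*;

    public class Base64RoundTripSketch {
        public static void main(String[] args) throws IOException {
            // encode the way JsonBase64VertexOutputFormat does
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new LongWritable(22).write(new DataOutputStream(bos));
            String encoded = Base64.encodeBytes(bos.toByteArray());

            // decode the way getCurrentVertex() above does
            LongWritable id = new LongWritable();
            id.readFields(new DataInputStream(
                new ByteArrayInputStream(Base64.decode(encoded))));
            System.out.println(id.get()); // prints 22
        }
    }
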
diff --git a/src/main/java/org/apache/giraph/lib/JsonBase64VertexOutputFormat.java b/src/main/java/org/apache/giraph/lib/JsonBase64VertexOutputFormat.java
new file mode 100644
index 0000000..fbd5b03
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/JsonBase64VertexOutputFormat.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+import net.iharder.Base64;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Simple way to represent the structure of the graph with a JSON object.
+ * The actual vertex ids, values, and edges are stored as
+ * Base64-encoded serialized Writable bytes.
+ * Works with {@link JsonBase64VertexInputFormat}
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public class JsonBase64VertexOutputFormat<
+        I extends WritableComparable, V extends Writable, E extends Writable>
+        extends TextVertexOutputFormat<I, V, E>
+        implements JsonBase64VertexFormat {
+    /**
+     * Simple writer that supports {@link JsonBase64VertexOutputFormat}
+     *
+     * @param <I> Vertex index value
+     * @param <V> Vertex value
+     * @param <E> Edge value
+     */
+    private static class JsonBase64VertexWriter<
+            I extends WritableComparable, V extends Writable,
+            E extends Writable> extends TextVertexWriter<I, V, E> {
+        /**
+         * Only constructor.  Requires the LineRecordWriter
+         *
+         * @param lineRecordWriter Line record writer to write to
+         */
+        public JsonBase64VertexWriter(
+                RecordWriter<Text, Text> lineRecordWriter) {
+            super(lineRecordWriter);
+        }
+
+        @Override
+        public void writeVertex(BasicVertex<I, V, E, ?> vertex)
+                throws IOException, InterruptedException {
+            ByteArrayOutputStream outputStream =
+                new ByteArrayOutputStream();
+            DataOutput output = new DataOutputStream(outputStream);
+            JSONObject vertexObject = new JSONObject();
+            vertex.getVertexId().write(output);
+            try {
+                vertexObject.put(
+                    VERTEX_ID_KEY,
+                    Base64.encodeBytes(outputStream.toByteArray()));
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "writeVertex: Failed to insert vertex id", e);
+            }
+            outputStream.reset();
+            vertex.getVertexValue().write(output);
+            try {
+                vertexObject.put(
+                    VERTEX_VALUE_KEY,
+                    Base64.encodeBytes(outputStream.toByteArray()));
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "writeVertex: Failed to insert vertex value", e);
+            }
+            JSONArray edgeArray = new JSONArray();
+            for (I targetVertexId : vertex) {
+                Edge<I, E> edge = new Edge<I, E>(
+                    targetVertexId, vertex.getEdgeValue(targetVertexId));
+                edge.setConf(getContext().getConfiguration());
+                outputStream.reset();
+                edge.write(output);
+                edgeArray.put(Base64.encodeBytes(outputStream.toByteArray()));
+            }
+            try {
+                vertexObject.put(EDGE_ARRAY_KEY, edgeArray);
+            } catch (JSONException e) {
+                throw new IllegalStateException(
+                    "writeVertex: Failed to insert edge array", e);
+            }
+            getRecordWriter().write(new Text(vertexObject.toString()), null);
+        }
+    }
+
+    @Override
+    public VertexWriter<I, V, E> createVertexWriter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        return new JsonBase64VertexWriter<I, V, E>(
+            textOutputFormat.getRecordWriter(context));
+    }
+
+}
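
Because the two formats round-trip (the "Works with" notes above), the pair
can serve as a simple text checkpoint of a graph between jobs. A hedged wiring
sketch; the job name is illustrative, the calls appear later in this patch:

    GiraphJob job = new GiraphJob("json-base64-io");
    job.setVertexInputFormatClass(JsonBase64VertexInputFormat.class);
    job.setVertexOutputFormatClass(JsonBase64VertexOutputFormat.class);
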
diff --git a/src/main/java/org/apache/giraph/lib/LongDoubleDoubleAdjacencyListVertexInputFormat.java b/src/main/java/org/apache/giraph/lib/LongDoubleDoubleAdjacencyListVertexInputFormat.java
new file mode 100644
index 0000000..c9eb527
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/LongDoubleDoubleAdjacencyListVertexInputFormat.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import org.apache.giraph.graph.Edge;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * InputFormat for reading graphs stored as (ordered) adjacency lists
+ * with long vertex ids and double vertex and edge values.
+ * For example:
+ * 22 0.1 45 0.3 99 0.44
+ * represents a vertex with id 22 and value 0.1, with edges to nodes 45 and
+ * 99, with values of 0.3 and 0.44, respectively.
+ */
+public class LongDoubleDoubleAdjacencyListVertexInputFormat<M extends Writable> extends
+    TextVertexInputFormat<LongWritable, DoubleWritable, DoubleWritable, M>  {
+
+  static class VertexReader<M extends Writable> extends
+      AdjacencyListVertexReader<LongWritable, DoubleWritable, DoubleWritable, M> {
+
+    VertexReader(RecordReader<LongWritable, Text> lineRecordReader) {
+      super(lineRecordReader);
+    }
+
+    VertexReader(RecordReader<LongWritable, Text> lineRecordReader,
+                 LineSanitizer sanitizer) {
+      super(lineRecordReader, sanitizer);
+    }
+
+    @Override
+    public void decodeId(String s, LongWritable id) {
+      id.set(Long.valueOf(s));
+    }
+
+    @Override
+    public void decodeValue(String s, DoubleWritable value) {
+      value.set(Double.valueOf(s));
+    }
+
+    @Override
+    public void decodeEdge(String s1, String s2,
+        Edge<LongWritable, DoubleWritable> longDoubleEdge) {
+      longDoubleEdge.setDestVertexId(new LongWritable(Long.valueOf(s1)));
+      longDoubleEdge.setEdgeValue(new DoubleWritable(Double.valueOf(s2)));
+    }
+  }
+
+  @Override
+  public org.apache.giraph.graph.VertexReader<LongWritable,
+    DoubleWritable, DoubleWritable, M> createVertexReader(
+      InputSplit split,
+      TaskAttemptContext context) throws IOException {
+    return new VertexReader<M>(textInputFormat.createRecordReader(
+      split, context));
+  }
+}
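
A hedged wiring sketch for this format; MyShortestPathsVertex and the input
path are hypothetical placeholders, while the GiraphJob and FileInputFormat
calls are the ones InternalVertexRunner uses later in this patch:

    GiraphJob job = new GiraphJob("adjacency-list-example");
    job.setVertexClass(MyShortestPathsVertex.class);
    job.setVertexInputFormatClass(
        LongDoubleDoubleAdjacencyListVertexInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/tmp/graph.txt"));
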
diff --git a/src/main/java/org/apache/giraph/lib/SequenceFileVertexInputFormat.java b/src/main/java/org/apache/giraph/lib/SequenceFileVertexInputFormat.java
new file mode 100644
index 0000000..41a19c2
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/SequenceFileVertexInputFormat.java
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
+
+import java.io.IOException;
+import java.util.List;
+
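+/**
+ * VertexInputFormat that reads vertices from Hadoop SequenceFiles of
+ * (vertex id, vertex) pairs by delegating to {@link SequenceFileInputFormat}.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ * @param <X> Vertex implementation stored as the file's value type
+ */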
+public class SequenceFileVertexInputFormat<I extends WritableComparable<I>,
+                                           V extends Writable,
+                                           E extends Writable,
+                                           M extends Writable,
+                                           X extends BasicVertex<I, V, E, M>>
+    extends VertexInputFormat<I, V, E, M> {
+  protected SequenceFileInputFormat<I, X> sequenceFileInputFormat
+      = new SequenceFileInputFormat<I, X>();
+
+  @Override public List<InputSplit> getSplits(JobContext context, int numWorkers)
+      throws IOException, InterruptedException {
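+    // Ignore the numWorkers hint; SequenceFileInputFormat computes splits.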
+    return sequenceFileInputFormat.getSplits(context);
+  }
+
+  @Override
+  public VertexReader<I, V, E, M> createVertexReader(InputSplit split,
+      TaskAttemptContext context)
+      throws IOException {
+    return new SequenceFileVertexReader<I, V, E, M, X>(
+        sequenceFileInputFormat.createRecordReader(split, context));
+  }
+
+  public static class SequenceFileVertexReader<I extends WritableComparable<I>,
+      V extends Writable, E extends Writable, M extends Writable,
+      X extends BasicVertex<I, V, E, M>>
+      implements VertexReader<I, V, E, M> {
+    private final RecordReader<I, X> recordReader;
+
+    public SequenceFileVertexReader(RecordReader<I, X> recordReader) {
+      this.recordReader = recordReader;
+    }
+
+    @Override public void initialize(InputSplit inputSplit, TaskAttemptContext context)
+        throws IOException, InterruptedException {
+      recordReader.initialize(inputSplit, context);
+    }
+
+    @Override public boolean nextVertex() throws IOException, InterruptedException {
+      return recordReader.nextKeyValue();
+    }
+
+    @Override public BasicVertex<I, V, E, M> getCurrentVertex()
+        throws IOException, InterruptedException {
+      return recordReader.getCurrentValue();
+    }
+
+    @Override public void close() throws IOException {
+      recordReader.close();
+    }
+
+    @Override public float getProgress() throws IOException, InterruptedException {
+      return recordReader.getProgress();
+    }
+  }
+}
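
A hedged usage sketch: the input must be a SequenceFile of (id, vertex) pairs
whose value class is the full vertex implementation. MyVertex and the path
are hypothetical:

    GiraphJob job = new GiraphJob("sequence-file-input");
    job.setVertexClass(MyVertex.class);
    job.setVertexInputFormatClass(SequenceFileVertexInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/tmp/vertices.seq"));
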
diff --git a/src/main/java/org/apache/giraph/lib/TextDoubleDoubleAdjacencyListVertexInputFormat.java b/src/main/java/org/apache/giraph/lib/TextDoubleDoubleAdjacencyListVertexInputFormat.java
new file mode 100644
index 0000000..9b2d69c
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/TextDoubleDoubleAdjacencyListVertexInputFormat.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import org.apache.giraph.graph.Edge;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+
+/**
+ * Class to read graphs stored as adjacency lists with ids represented by
+ * Strings and values as doubles.  This is a good input format for reading
+ * graphs where the id types do not matter and can be stashed in a String.
+ */
+public class TextDoubleDoubleAdjacencyListVertexInputFormat<M extends Writable>
+    extends TextVertexInputFormat<Text, DoubleWritable, DoubleWritable, M>  {
+
+  static class VertexReader<M extends Writable> extends AdjacencyListVertexReader<Text,
+      DoubleWritable, DoubleWritable, M> {
+
+    VertexReader(RecordReader<LongWritable, Text> lineRecordReader) {
+      super(lineRecordReader);
+    }
+
+    VertexReader(RecordReader<LongWritable, Text> lineRecordReader,
+                 LineSanitizer sanitizer) {
+      super(lineRecordReader, sanitizer);
+    }
+
+    @Override
+    public void decodeId(String s, Text id) {
+      id.set(s);
+    }
+
+    @Override
+    public void decodeValue(String s, DoubleWritable value) {
+      value.set(Double.valueOf(s));
+    }
+
+    @Override
+    public void decodeEdge(String s1, String s2,
+        Edge<Text, DoubleWritable> textDoubleEdge) {
+      textDoubleEdge.setDestVertexId(new Text(s1));
+      textDoubleEdge.setEdgeValue(new DoubleWritable(Double.valueOf(s2)));
+    }
+  }
+
+  @Override
+  public org.apache.giraph.graph.VertexReader<Text, DoubleWritable,
+    DoubleWritable, M> createVertexReader(
+      InputSplit split,
+      TaskAttemptContext context) throws IOException {
+    return new VertexReader<M>(textInputFormat.createRecordReader(
+      split, context));
+  }
+
+}
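
An illustrative input line for this Text-keyed variant; the field separator is
whatever AdjacencyListVertexReader is configured with (shown here as spaces):

    seattle 0.0 portland 2.9 vancouver 1.7
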
diff --git a/src/main/java/org/apache/giraph/lib/TextVertexInputFormat.java b/src/main/java/org/apache/giraph/lib/TextVertexInputFormat.java
new file mode 100644
index 0000000..62fb435
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/TextVertexInputFormat.java
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.giraph.graph.VertexReader;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Abstract class that users should subclass to use their own text based
+ * vertex input format.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ * @param <M> Message value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class TextVertexInputFormat<
+        I extends WritableComparable,
+        V extends Writable,
+        E extends Writable,
+        M extends Writable>
+        extends VertexInputFormat<I, V, E, M> {
+    /** Uses the TextInputFormat to do everything */
+    protected TextInputFormat textInputFormat = new TextInputFormat();
+
+    /**
+     * Abstract class to be implemented by the user based on their specific
+     * vertex input.  Easiest to ignore the key value separator and only use
+     * key instead.
+     *
+     * @param <I> Vertex index value
+     * @param <V> Vertex value
+     * @param <E> Edge value
+     * @param <M> Message value
+     */
+    public static abstract class TextVertexReader<I extends WritableComparable,
+            V extends Writable, E extends Writable, M extends Writable>
+            implements VertexReader<I, V, E, M> {
+        /** Internal line record reader */
+        private final RecordReader<LongWritable, Text> lineRecordReader;
+        /** Context passed to initialize */
+        private TaskAttemptContext context;
+
+        /**
+         * Initialize with the LineRecordReader.
+         *
+         * @param lineRecordReader Line record reader from TextInputFormat
+         */
+        public TextVertexReader(
+                RecordReader<LongWritable, Text> lineRecordReader) {
+            this.lineRecordReader = lineRecordReader;
+        }
+
+        @Override
+        public void initialize(InputSplit inputSplit,
+                               TaskAttemptContext context)
+                               throws IOException, InterruptedException {
+            lineRecordReader.initialize(inputSplit, context);
+            this.context = context;
+        }
+
+        @Override
+        public void close() throws IOException {
+            lineRecordReader.close();
+        }
+
+        @Override
+        public float getProgress() throws IOException, InterruptedException {
+            return lineRecordReader.getProgress();
+        }
+
+        /**
+         * Get the line record reader.
+         *
+         * @return Record reader to be used for reading.
+         */
+        protected RecordReader<LongWritable, Text> getRecordReader() {
+            return lineRecordReader;
+        }
+
+        /**
+         * Get the context.
+         *
+         * @return Context passed to initialize.
+         */
+        protected TaskAttemptContext getContext() {
+            return context;
+        }
+    }
+
+    @Override
+    public List<InputSplit> getSplits(
+            JobContext context, int numWorkers)
+            throws IOException, InterruptedException {
+        // Ignore the hint of numWorkers here since we are using TextInputFormat
+        // to do this for us
+        return textInputFormat.getSplits(context);
+    }
+}
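
To make the subclassing contract concrete, a hedged sketch of a hypothetical
format that reads "id<TAB>value" lines with long ids, double values, and no
edges; the BspUtils/initialize pattern mirrors JsonBase64VertexInputFormat
earlier in this patch:

    // imports as in JsonBase64VertexInputFormat above, plus java.util.HashMap
    public class LongDoubleLineVertexInputFormat<M extends Writable> extends
        TextVertexInputFormat<LongWritable, DoubleWritable, DoubleWritable, M> {

      static class LineReader<M extends Writable> extends
          TextVertexReader<LongWritable, DoubleWritable, DoubleWritable, M> {
        LineReader(RecordReader<LongWritable, Text> lineRecordReader) {
          super(lineRecordReader);
        }

        @Override
        public boolean nextVertex() throws IOException, InterruptedException {
          return getRecordReader().nextKeyValue();
        }

        @Override
        public BasicVertex<LongWritable, DoubleWritable, DoubleWritable, M>
        getCurrentVertex() throws IOException, InterruptedException {
          Configuration conf = getContext().getConfiguration();
          BasicVertex<LongWritable, DoubleWritable, DoubleWritable, M> vertex =
              BspUtils.createVertex(conf);
          String[] tokens =
              getRecordReader().getCurrentValue().toString().split("\t");
          vertex.initialize(
              new LongWritable(Long.parseLong(tokens[0])),
              new DoubleWritable(Double.parseDouble(tokens[1])),
              new HashMap<LongWritable, DoubleWritable>(), null);
          return vertex;
        }
      }

      @Override
      public VertexReader<LongWritable, DoubleWritable, DoubleWritable, M>
      createVertexReader(InputSplit split, TaskAttemptContext context)
          throws IOException {
        return new LineReader<M>(
            textInputFormat.createRecordReader(split, context));
      }
    }
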
diff --git a/src/main/java/org/apache/giraph/lib/TextVertexOutputFormat.java b/src/main/java/org/apache/giraph/lib/TextVertexOutputFormat.java
new file mode 100644
index 0000000..a67ca3b
--- /dev/null
+++ b/src/main/java/org/apache/giraph/lib/TextVertexOutputFormat.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+import java.io.IOException;
+
+import org.apache.giraph.graph.VertexOutputFormat;
+import org.apache.giraph.graph.VertexWriter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+
+/**
+ * Abstract class that users should subclass to use their own text based
+ * vertex output format.
+ *
+ * @param <I> Vertex index value
+ * @param <V> Vertex value
+ * @param <E> Edge value
+ */
+@SuppressWarnings("rawtypes")
+public abstract class TextVertexOutputFormat<
+        I extends WritableComparable, V extends Writable, E extends Writable>
+        extends VertexOutputFormat<I, V, E> {
+    /** Uses the TextOutputFormat to do everything */
+    protected TextOutputFormat<Text, Text> textOutputFormat =
+        new TextOutputFormat<Text, Text>();
+
+    /**
+     * Abstract class to be implemented by the user based on their specific
+     * vertex output.  Easiest to ignore the key value separator and only use
+     * key instead.
+     *
+     * @param <I> Vertex index value
+     * @param <V> Vertex value
+     * @param <E> Edge value
+     */
+    public static abstract class TextVertexWriter<I extends WritableComparable,
+            V extends Writable, E extends Writable>
+            implements VertexWriter<I, V, E> {
+        /** Context passed to initialize */
+        private TaskAttemptContext context;
+        /** Internal line record writer */
+        private final RecordWriter<Text, Text> lineRecordWriter;
+
+        /**
+         * Initialize with the LineRecordWriter.
+         *
+         * @param lineRecordWriter Line record writer from TextOutputFormat
+         */
+        public TextVertexWriter(RecordWriter<Text, Text> lineRecordWriter) {
+            this.lineRecordWriter = lineRecordWriter;
+        }
+
+        @Override
+        public void initialize(TaskAttemptContext context) throws IOException {
+            this.context = context;
+        }
+
+        @Override
+        public void close(TaskAttemptContext context)
+                throws IOException, InterruptedException {
+            lineRecordWriter.close(context);
+        }
+
+        /**
+         * Get the line record writer.
+         *
+         * @return Record writer to be used for writing.
+         */
+        public RecordWriter<Text, Text> getRecordWriter() {
+            return lineRecordWriter;
+        }
+
+        /**
+         * Get the context.
+         *
+         * @return Context passed to initialize.
+         */
+        public TaskAttemptContext getContext() {
+            return context;
+        }
+    }
+
+    @Override
+    public void checkOutputSpecs(JobContext context)
+            throws IOException, InterruptedException {
+        textOutputFormat.checkOutputSpecs(context);
+    }
+
+    @Override
+    public OutputCommitter getOutputCommitter(TaskAttemptContext context)
+            throws IOException, InterruptedException {
+        return textOutputFormat.getOutputCommitter(context);
+    }
+}
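
The matching output-side sketch: a hypothetical writer emitting "id<TAB>value"
by handing both halves to the line record writer (TextOutputFormat places its
separator, a tab by default, between key and value):

    // imports as in TextVertexOutputFormat above
    public class IdAndValueOutputFormat<I extends WritableComparable,
            V extends Writable, E extends Writable>
            extends TextVertexOutputFormat<I, V, E> {

        static class Writer<I extends WritableComparable, V extends Writable,
                E extends Writable> extends TextVertexWriter<I, V, E> {
            Writer(RecordWriter<Text, Text> lineRecordWriter) {
                super(lineRecordWriter);
            }

            @Override
            public void writeVertex(BasicVertex<I, V, E, ?> vertex)
                    throws IOException, InterruptedException {
                getRecordWriter().write(
                    new Text(vertex.getVertexId().toString()),
                    new Text(vertex.getVertexValue().toString()));
            }
        }

        @Override
        public VertexWriter<I, V, E> createVertexWriter(
                TaskAttemptContext context)
                throws IOException, InterruptedException {
            return new Writer<I, V, E>(
                textOutputFormat.getRecordWriter(context));
        }
    }
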
diff --git a/src/main/java/org/apache/giraph/utils/ComparisonUtils.java b/src/main/java/org/apache/giraph/utils/ComparisonUtils.java
new file mode 100644
index 0000000..c49171f
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/ComparisonUtils.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import java.util.Iterator;
+
+/** simple helper class for comparisons and equality checking */
+public class ComparisonUtils {
+
+    private ComparisonUtils() {
+    }
+
+    /** compare elements, sort order and length */
+    public static <T> boolean equal(Iterable<T> first, Iterable<T> second) {
+        return equal(first.iterator(), second.iterator());
+    }
+
+    /** compare elements, sort order and length */
+    public static <T> boolean equal(Iterator<T> first, Iterator<T> second) {
+        while (first.hasNext() && second.hasNext()) {
+            T message = first.next();
+            T otherMessage = second.next();
+            /* element-wise equality */
+            if (!(message == null ? otherMessage == null :
+                    message.equals(otherMessage))) {
+                return false;
+            }
+        }
+        /* length must also be equal */
+        return !(first.hasNext() || second.hasNext());
+    }
+}
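
A small usage sketch with plain java.util collections:

    List<Integer> a = Arrays.asList(1, 2, 3);
    ComparisonUtils.equal(a, Arrays.asList(1, 2, 3)); // true
    ComparisonUtils.equal(a, Arrays.asList(1, 2));    // false: length differs
    ComparisonUtils.equal(a, Arrays.asList(3, 2, 1)); // false: order differs
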
diff --git a/src/main/java/org/apache/giraph/utils/EmptyIterable.java b/src/main/java/org/apache/giraph/utils/EmptyIterable.java
new file mode 100644
index 0000000..795cace
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/EmptyIterable.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
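+/**
+ * An Iterable over no elements that also acts as its own (empty) Iterator.
+ *
+ * @param <M> Element type
+ */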
+public class EmptyIterable<M> implements Iterable<M>, Iterator<M> {
+
+    @Override
+    public Iterator<M> iterator() {
+        return this;
+    }
+
+    @Override
+    public boolean hasNext() {
+        return false;
+    }
+
+    @Override
+    public M next() {
+        throw new NoSuchElementException();
+    }
+
+    @Override
+    public void remove() {
+        throw new UnsupportedOperationException();
+    }
+}
+
diff --git a/src/main/java/org/apache/giraph/utils/InternalVertexRunner.java b/src/main/java/org/apache/giraph/utils/InternalVertexRunner.java
new file mode 100644
index 0000000..5db8421
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/InternalVertexRunner.java
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import com.google.common.base.Charsets;
+import com.google.common.io.Closeables;
+import com.google.common.io.Files;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.zookeeper.server.ServerConfig;
+import org.apache.zookeeper.server.ZooKeeperServerMain;
+import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.io.IOException;
+import java.io.Writer;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+/**
+ * A utility class for running a vertex in an internal (in-JVM) test.
+ *
+ * Callers only have to invoke a run() method to test their vertex. All data
+ * is written to a local tmp directory that is removed afterwards. A local
+ * ZooKeeper instance is started in an extra thread and shut down at the end.
+ *
+ * Heavily inspired by Apache Mahout's MahoutTestCase.
+ */
+public class InternalVertexRunner {
+
+    public static final int LOCAL_ZOOKEEPER_PORT = 22182;
+
+    private InternalVertexRunner() {
+    }
+
+    /**
+     *  Attempts to run the vertex internally in the current JVM, reading from and writing to a
+     *  temporary folder on local disk. Will start its own ZooKeeper instance.
+     *
+     * @param vertexClass the vertex class to instantiate
+     * @param vertexInputFormatClass the inputformat to use
+     * @param vertexOutputFormatClass the outputformat to use
+     * @param params a map of parameters to add to the hadoop configuration
+     * @param data linewise input data
+     * @return linewise output data
+     * @throws Exception
+     */
+    public static Iterable<String> run(Class<?> vertexClass,
+            Class<?> vertexInputFormatClass, Class<?> vertexOutputFormatClass,
+            Map<String, String> params, String... data) throws Exception {
+        return run(vertexClass, null, vertexInputFormatClass,
+                vertexOutputFormatClass, params, data);
+    }
+    
+    /**
+     *  Attempts to run the vertex internally in the current JVM, reading from and writing to a
+     *  temporary folder on local disk. Will start its own ZooKeeper instance.
+     *
+     * @param vertexClass the vertex class to instantiate
+     * @param vertexCombinerClass the vertex combiner to use (or null)
+     * @param vertexInputFormatClass the inputformat to use
+     * @param vertexOutputFormatClass the outputformat to use
+     * @param params a map of parameters to add to the hadoop configuration
+     * @param data linewise input data
+     * @return linewise output data
+     * @throws Exception
+     */
+    public static Iterable<String> run(Class<?> vertexClass,
+            Class<?> vertexCombinerClass, Class<?> vertexInputFormatClass, 
+            Class<?> vertexOutputFormatClass, Map<String, String> params,
+            String... data) throws Exception {
+
+        File tmpDir = null;
+        try {
+            // prepare input file, output folder and zookeeper folder
+            tmpDir = createTestDir(vertexClass);
+            File inputFile = createTempFile(tmpDir, "graph.txt");
+            File outputDir = createTempDir(tmpDir, "output");
+            File zkDir = createTempDir(tmpDir, "zooKeeper");
+
+            // write input data to disk
+            writeLines(inputFile, data);
+
+            // create and configure the job to run the vertex
+            GiraphJob job = new GiraphJob(vertexClass.getName());
+            job.setVertexClass(vertexClass);
+            job.setVertexInputFormatClass(vertexInputFormatClass);
+            job.setVertexOutputFormatClass(vertexOutputFormatClass);
+            
+            if (vertexCombinerClass != null) {
+                job.setVertexCombinerClass(vertexCombinerClass);
+            }
+
+            job.setWorkerConfiguration(1, 1, 100.0f);
+            Configuration conf = job.getConfiguration();
+            conf.setBoolean(GiraphJob.SPLIT_MASTER_WORKER, false);
+            conf.setBoolean(GiraphJob.LOCAL_TEST_MODE, true);
+            conf.set(GiraphJob.ZOOKEEPER_LIST, "localhost:" +
+                    String.valueOf(LOCAL_ZOOKEEPER_PORT));
+
+            for (Map.Entry<String,String> param : params.entrySet()) {
+                conf.set(param.getKey(), param.getValue());
+            }
+
+            FileInputFormat.addInputPath(job, new Path(inputFile.toString()));
+            FileOutputFormat.setOutputPath(job, new Path(outputDir.toString()));
+
+            // configure a local zookeeper instance
+            Properties zkProperties = new Properties();
+            zkProperties.setProperty("tickTime", "2000");
+            zkProperties.setProperty("dataDir", zkDir.getAbsolutePath());
+            zkProperties.setProperty("clientPort",
+                    String.valueOf(LOCAL_ZOOKEEPER_PORT));
+            zkProperties.setProperty("maxClientCnxns", "10000");
+            zkProperties.setProperty("minSessionTimeout", "10000");
+            zkProperties.setProperty("maxSessionTimeout", "100000");
+            zkProperties.setProperty("initLimit", "10");
+            zkProperties.setProperty("syncLimit", "5");
+            zkProperties.setProperty("snapCount", "50000");
+
+            QuorumPeerConfig qpConfig = new QuorumPeerConfig();
+            qpConfig.parseProperties(zkProperties);
+
+            // create and run the zookeeper instance
+            final InternalZooKeeper zookeeper = new InternalZooKeeper();
+            final ServerConfig zkConfig = new ServerConfig();
+            zkConfig.readFrom(qpConfig);
+
+            ExecutorService executorService = Executors.newSingleThreadExecutor();
+            executorService.execute(new Runnable() {
+                @Override
+                public void run() {
+                    try {
+                        zookeeper.runFromConfig(zkConfig);
+                    } catch (IOException e) {
+                        throw new RuntimeException(e);
+                    }
+                }
+            });
+            try {
+                job.run(true);
+            } finally {
+                executorService.shutdown();
+                zookeeper.end();
+            }
+
+            return Files.readLines(new File(outputDir, "part-m-00000"),
+                    Charsets.UTF_8);
+        } finally {
+            if (tmpDir != null) {
+                new DeletingVisitor().accept(tmpDir);
+            }
+        }
+    }
+
+    /**
+     *  Create a temporary folder that will be removed after the test
+     */
+    private static File createTestDir(Class<?> vertexClass)
+            throws IOException {
+        String systemTmpDir = System.getProperty("java.io.tmpdir");
+        long simpleRandomLong = (long) (Long.MAX_VALUE * Math.random());
+        File testTempDir = new File(systemTmpDir, "giraph-" +
+                vertexClass.getSimpleName() + '-' + simpleRandomLong);
+        if (!testTempDir.mkdir()) {
+            throw new IOException("Could not create " + testTempDir);
+        }
+        testTempDir.deleteOnExit();
+        return testTempDir;
+    }
+
+    private static File createTempFile(File parent, String name)
+            throws IOException {
+        return createTestTempFileOrDir(parent, name, false);
+    }
+
+    private static File createTempDir(File parent, String name)
+            throws IOException {
+        File dir = createTestTempFileOrDir(parent, name, true);
+        dir.delete();
+        return dir;
+    }
+
+    private static File createTestTempFileOrDir(File parent, String name,
+            boolean dir) throws IOException {
+        File f = new File(parent, name);
+        f.deleteOnExit();
+        if (dir && !f.mkdirs()) {
+            throw new IOException("Could not make directory " + f);
+        }
+        return f;
+    }
+
+    private static void writeLines(File file, String... lines)
+            throws IOException {
+        Writer writer = Files.newWriter(file, Charsets.UTF_8);
+        try {
+            for (String line : lines) {
+                writer.write(line);
+                writer.write('\n');
+            }
+        } finally {
+            Closeables.closeQuietly(writer);
+        }
+    }
+
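+    /**
+     * {@link FileFilter} used for its side effect: accept() recursively
+     * deletes the files and directories it visits and always returns false.
+     */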
+    private static class DeletingVisitor implements FileFilter {
+        @Override
+        public boolean accept(File f) {
+            if (!f.isFile()) {
+                f.listFiles(this);
+            }
+            f.delete();
+            return false;
+        }
+    }
+
+    /**
+     * Extension of {@link ZooKeeperServerMain} that allows programmatic shutdown
+     */
+    private static class InternalZooKeeper extends ZooKeeperServerMain {
+        void end() {
+            shutdown();
+        }
+    }
+
+}
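
A hedged test sketch against the second run() overload above; every class and
input line here is a stand-in for whatever a concrete test exercises:

    Map<String, String> params = new HashMap<String, String>();
    params.put("my.vertex.sourceId", "1");           // hypothetical parameter
    Iterable<String> results = InternalVertexRunner.run(
        MyVertex.class,                              // hypothetical classes
        MyCombiner.class,
        MyVertexInputFormat.class,
        MyVertexOutputFormat.class,
        params,
        "1 0.0 2 1.0",                               // placeholder input
        "2 0.0 1 1.0");
    for (String line : results) {
        System.out.println(line);
    }
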
diff --git a/src/main/java/org/apache/giraph/utils/MemoryUtils.java b/src/main/java/org/apache/giraph/utils/MemoryUtils.java
new file mode 100644
index 0000000..31c30f9
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/MemoryUtils.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+/**
+ * Helper static methods for tracking memory usage.
+ */
+public class MemoryUtils {
+    /**
+     * Get stringified runtime memory stats
+     *
+     * @return String of all Runtime stats.
+     */
+    public static String getRuntimeMemoryStats() {
+        return "totalMem = " +
+               (Runtime.getRuntime().totalMemory() / 1024f / 1024f) +
+               "M, maxMem = " +
+               (Runtime.getRuntime().maxMemory() / 1024f / 1024f) +
+               "M, freeMem = " +
+               (Runtime.getRuntime().freeMemory() / 1024f / 1024f) +
+               "M";
+    }
+}
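
A one-line usage sketch; the log4j-style LOG field is a stand-in:

    LOG.info("superstep stats: " + MemoryUtils.getRuntimeMemoryStats());
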
diff --git a/src/main/java/org/apache/giraph/utils/ReflectionUtils.java b/src/main/java/org/apache/giraph/utils/ReflectionUtils.java
new file mode 100644
index 0000000..8652b62
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/ReflectionUtils.java
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import java.lang.reflect.Array;
+import java.lang.reflect.Field;
+import java.lang.reflect.GenericArrayType;
+import java.lang.reflect.ParameterizedType;
+import java.lang.reflect.Type;
+import java.lang.reflect.TypeVariable;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper methods to get type arguments to generic classes.  Courtesy of
+ * Ian Robertson (overstock.com).  Make sure to use with abstract
+ * generic classes, not interfaces.
+ */
+public class ReflectionUtils {
+    /**
+     * Get the underlying class for a type, or null if the type is
+     * a variable type.
+     *
+     * @param type the type
+     * @return the underlying class
+     */
+    public static Class<?> getClass(Type type) {
+        if (type instanceof Class) {
+            return (Class<?>) type;
+        }
+        else if (type instanceof ParameterizedType) {
+            return getClass(((ParameterizedType) type).getRawType());
+        }
+        else if (type instanceof GenericArrayType) {
+            Type componentType =
+                ((GenericArrayType) type).getGenericComponentType();
+            Class<?> componentClass = getClass(componentType);
+            if (componentClass != null) {
+                return Array.newInstance(componentClass, 0).getClass();
+            }
+            else {
+                return null;
+            }
+        }
+        else {
+            return null;
+        }
+    }
+
+    /**
+     * Get the actual type arguments a child class has used to extend a
+     * generic base class.
+     *
+     * @param baseClass the base class
+     * @param childClass the child class
+     * @return a list of the raw classes for the actual type arguments.
+     */
+    public static <T> List<Class<?>> getTypeArguments(
+            Class<T> baseClass, Class<? extends T> childClass) {
+        Map<Type, Type> resolvedTypes = new HashMap<Type, Type>();
+        Type type = childClass;
+        // start walking up the inheritance hierarchy until we hit baseClass
+        while (! getClass(type).equals(baseClass)) {
+            if (type instanceof Class) {
+                // there is no useful information for us in raw types,
+                // so just keep going.
+                type = ((Class<?>) type).getGenericSuperclass();
+            }
+            else {
+                ParameterizedType parameterizedType = (ParameterizedType) type;
+                Class<?> rawType = (Class<?>) parameterizedType.getRawType();
+
+                Type[] actualTypeArguments =
+                    parameterizedType.getActualTypeArguments();
+                TypeVariable<?>[] typeParameters = rawType.getTypeParameters();
+                for (int i = 0; i < actualTypeArguments.length; i++) {
+                    resolvedTypes.put(typeParameters[i],
+                                      actualTypeArguments[i]);
+                }
+
+                if (!rawType.equals(baseClass)) {
+                    type = rawType.getGenericSuperclass();
+                }
+            }
+        }
+
+        // finally, for each actual type argument provided to baseClass,
+        // determine (if possible)
+        // the raw class for that type argument.
+        Type[] actualTypeArguments;
+        if (type instanceof Class) {
+            actualTypeArguments = ((Class<?>) type).getTypeParameters();
+        }
+        else {
+            actualTypeArguments =
+                ((ParameterizedType) type).getActualTypeArguments();
+        }
+        List<Class<?>> typeArgumentsAsClasses = new ArrayList<Class<?>>();
+        // resolve types by chasing down type variables.
+        for (Type baseType: actualTypeArguments) {
+            while (resolvedTypes.containsKey(baseType)) {
+                baseType = resolvedTypes.get(baseType);
+            }
+            typeArgumentsAsClasses.add(getClass(baseType));
+        }
+        return typeArgumentsAsClasses;
+    }
+
+    /** try to directly set a (possibly private) field on an Object */
+    public static void setField(Object target, String fieldname, Object value)
+            throws NoSuchFieldException, IllegalAccessException {
+        Field field = findDeclaredField(target.getClass(), fieldname);
+        field.setAccessible(true);
+        field.set(target, value);
+    }
+
+    /** find a declared field in a class or one of its super classes */
+    private static Field findDeclaredField(Class<?> inClass, String fieldname)
+            throws NoSuchFieldException {
+        while (!Object.class.equals(inClass)) {
+            for (Field field : inClass.getDeclaredFields()) {
+                if (field.getName().equalsIgnoreCase(fieldname)) {
+                    return field;
+                }
+            }
+            inClass = inClass.getSuperclass();
+        }
+        throw new NoSuchFieldException();
+    }
+}
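
A sketch of getTypeArguments(); MyVertex and the Vertex base class parameters
are hypothetical, but the call shape follows the signature above:

    // given: class MyVertex extends Vertex<LongWritable, DoubleWritable,
    //                                      FloatWritable, DoubleWritable>
    List<Class<?>> types =
        ReflectionUtils.getTypeArguments(Vertex.class, MyVertex.class);
    // types -> [LongWritable, DoubleWritable, FloatWritable, DoubleWritable]
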
diff --git a/src/main/java/org/apache/giraph/utils/UnmodifiableIntArrayIterator.java b/src/main/java/org/apache/giraph/utils/UnmodifiableIntArrayIterator.java
new file mode 100644
index 0000000..e01d242
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/UnmodifiableIntArrayIterator.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import com.google.common.collect.UnmodifiableIterator;
+import org.apache.hadoop.io.IntWritable;
+
+/**
+ * {@link UnmodifiableIterator} over a primitive int array
+ */
+public class UnmodifiableIntArrayIterator
+        extends UnmodifiableIterator<IntWritable> {
+
+    private final int[] arr;
+    private int offset;
+
+    public UnmodifiableIntArrayIterator(int[] arr) {
+        this.arr = arr;
+        offset = 0;
+    }
+
+    @Override
+    public boolean hasNext() {
+        return offset < arr.length;
+    }
+
+    @Override
+    public IntWritable next() {
+        return new IntWritable(arr[offset++]);
+    }
+}
\ No newline at end of file
diff --git a/src/main/java/org/apache/giraph/utils/WritableUtils.java b/src/main/java/org/apache/giraph/utils/WritableUtils.java
new file mode 100644
index 0000000..885dde3
--- /dev/null
+++ b/src/main/java/org/apache/giraph/utils/WritableUtils.java
@@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutput;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.giraph.zk.ZooKeeperExt;
+import org.apache.giraph.zk.ZooKeeperExt.PathStat;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Helper static methods for working with Writable objects.
+ */
+public class WritableUtils {
+    public static void readFieldsFromByteArray(
+            byte[] byteArray, Writable writableObject) {
+        DataInputStream inputStream =
+            new DataInputStream(new ByteArrayInputStream(byteArray));
+        try {
+            writableObject.readFields(inputStream);
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "readFieldsFromByteArray: IOException", e);
+        }
+    }
+
+    public static void readFieldsFromZnode(ZooKeeperExt zkExt,
+                                           String zkPath,
+                                           boolean watch,
+                                           Stat stat,
+                                           Writable writableObject) {
+        try {
+            byte[] zkData = zkExt.getData(zkPath, watch, stat);
+            readFieldsFromByteArray(zkData, writableObject);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+               "readFieldsFromZnode: KeeperException on " + zkPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+               "readFieldsFromZnode: InterruptedException on " + zkPath,
+               e);
+        }
+    }
+
+    public static byte[] writeToByteArray(Writable writableObject) {
+        ByteArrayOutputStream outputStream =
+            new ByteArrayOutputStream();
+        DataOutput output = new DataOutputStream(outputStream);
+        try {
+            writableObject.write(output);
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "writeToByteArray: IOException", e);
+        }
+        return outputStream.toByteArray();
+    }
+
+    public static PathStat writeToZnode(ZooKeeperExt zkExt,
+                                        String zkPath,
+                                        int version,
+                                        Writable writableObject) {
+        try {
+            byte[] byteArray = writeToByteArray(writableObject);
+            return zkExt.createOrSetExt(zkPath,
+                                        byteArray,
+                                        Ids.OPEN_ACL_UNSAFE,
+                                        CreateMode.PERSISTENT,
+                                        true,
+                                        version);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+               "writeToZnode: KeeperException on " + zkPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "writeToZnode: InterruptedException on " + zkPath, e);
+        }
+    }
+
+    public static byte[] writeListToByteArray(
+            List<? extends Writable> writableList) {
+        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
+        DataOutput output = new DataOutputStream(outputStream);
+        try {
+            output.writeInt(writableList.size());
+            for (Writable writable : writableList) {
+                writable.write(output);
+            }
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "writeListToByteArray: IOException", e);
+        }
+        return outputStream.toByteArray();
+    }
+
+    public static PathStat writeListToZnode(
+            ZooKeeperExt zkExt,
+            String zkPath,
+            int version,
+            List<? extends Writable> writableList) {
+        try {
+            return zkExt.createOrSetExt(
+                zkPath,
+                writeListToByteArray(writableList),
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT,
+                true,
+                version);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+               "writeListToZnode: KeeperException on " + zkPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "writeListToZnode: InterruptedException on " + zkPath, e);
+        }
+    }
+
+    public static List<? extends Writable> readListFieldsFromByteArray(
+            byte[] byteArray,
+            Class<? extends Writable> writableClass,
+            Configuration conf) {
+        try {
+            DataInputStream inputStream =
+                new DataInputStream(new ByteArrayInputStream(byteArray));
+            int size = inputStream.readInt();
+            List<Writable> writableList = new ArrayList<Writable>(size);
+            for (int i = 0; i < size; ++i) {
+                Writable writable =
+                    ReflectionUtils.newInstance(writableClass, conf);
+                writable.readFields(inputStream);
+                writableList.add(writable);
+            }
+            return writableList;
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "readListFieldsFromByteArray: IOException", e);
+        }
+    }
+
+    public static List<? extends Writable> readListFieldsFromZnode(
+            ZooKeeperExt zkExt,
+            String zkPath,
+            boolean watch,
+            Stat stat,
+            Class<? extends Writable> writableClass,
+            Configuration conf) {
+        try {
+            byte[] zkData = zkExt.getData(zkPath, watch, stat);
+            return readListFieldsFromByteArray(zkData, writableClass, conf);
+        } catch (KeeperException e) {
+            throw new IllegalStateException(
+                "readListFieldsFromZnode: KeeperException on " + zkPath, e);
+        } catch (InterruptedException e) {
+            throw new IllegalStateException(
+                "readListFieldsFromZnode: InterruptedException on " + zkPath,
+                e);
+        }
+    }
+}
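
A round-trip sketch for the helpers above (not part of the patch; imports of
java.util.Arrays, java.util.List, IntWritable, and Configuration are
assumed).  Any Writable with a no-arg constructor works the same way:

    IntWritable written = new IntWritable(42);
    byte[] bytes = WritableUtils.writeToByteArray(written);
    IntWritable read = new IntWritable();
    WritableUtils.readFieldsFromByteArray(bytes, read);
    // read.get() == 42

    // List variant: elements are instantiated reflectively, so the
    // element class needs a usable no-arg constructor.
    List<IntWritable> values =
        Arrays.asList(new IntWritable(1), new IntWritable(2));
    byte[] listBytes = WritableUtils.writeListToByteArray(values);
    List<? extends Writable> restored =
        WritableUtils.readListFieldsFromByteArray(
            listBytes, IntWritable.class, new Configuration());
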
diff --git a/src/main/java/org/apache/giraph/zk/BspEvent.java b/src/main/java/org/apache/giraph/zk/BspEvent.java
new file mode 100644
index 0000000..787b710
--- /dev/null
+++ b/src/main/java/org/apache/giraph/zk/BspEvent.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.zk;
+
+/**
+ * Synchronize on waiting for an event to have happened.  The event is
+ * permanent once signaled, until reset() re-arms it.
+ */
+public interface BspEvent {
+    /**
+     * Reset the permanent signal.
+     */
+    void reset();
+
+    /**
+     * The event occurred and the occurrence has been logged for future
+     * waiters.
+     */
+    void signal();
+
+    /**
+     * Wait until the event occurs or the wait times out.
+     * @param msecs Milliseconds to wait for the event. 0 indicates
+     *        check immediately.  -1 indicates wait forever.
+     * @return true if the event occurred, false if the wait timed out
+     */
+    boolean waitMsecs(int msecs);
+
+    /**
+     * Wait indefinitely until the event occurs.
+     */
+    void waitForever();
+}
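
The intended protocol, sketched with the PredicateLock implementation that
appears later in this patch (exception handling elided): one thread blocks
on the event while another signals it.

    final BspEvent event = new PredicateLock();
    new Thread(new Runnable() {
        public void run() {
            // ... do the work the waiter depends on ...
            event.signal();
        }
    }).start();
    event.waitForever();  // returns once signal() has been called
    event.reset();        // re-arm for the next occurrence
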
diff --git a/src/main/java/org/apache/giraph/zk/ContextLock.java b/src/main/java/org/apache/giraph/zk/ContextLock.java
new file mode 100644
index 0000000..265ac03
--- /dev/null
+++ b/src/main/java/org/apache/giraph/zk/ContextLock.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.zk;
+
+import org.apache.hadoop.mapreduce.Mapper.Context;
+
+/**
+ * A lock that keeps the job context updated (via progress()) while
+ * waiting, so that Hadoop does not kill the task for apparent inactivity.
+ */
+public class ContextLock extends PredicateLock {
+    /** Job context (for progress) */
+    @SuppressWarnings("rawtypes")
+    private final Context context;
+    /** Msecs to refresh the progress meter */
+    private static final int MSEC_PERIOD = 10000;
+
+    /**
+     * Constructor.
+     *
+     * @param context used to call progress()
+     */
+    ContextLock(@SuppressWarnings("rawtypes") Context context) {
+        this.context = context;
+    }
+
+    /**
+     * Specialized version of waitForever() that will keep the job progressing
+     * while waiting.
+     */
+    @Override
+    public void waitForever() {
+        while (!waitMsecs(MSEC_PERIOD)) {
+            context.progress();
+        }
+    }
+}
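
A sketch of why this subclass exists: Hadoop kills tasks that report no
progress within mapred.task.timeout (600000 msecs by default), so a long
blocking wait must keep pinging the context.  The constructor is
package-private, so this is only usable within org.apache.giraph.zk:

    BspEvent event = new ContextLock(context);  // context: Mapper.Context
    event.waitForever();  // calls context.progress() every 10 secs while blocked
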
diff --git a/src/main/java/org/apache/giraph/zk/PredicateLock.java b/src/main/java/org/apache/giraph/zk/PredicateLock.java
new file mode 100644
index 0000000..f5fe27e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/zk/PredicateLock.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.zk;
+
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.log4j.Logger;
+
+/**
+ * A lock with a predicate that can be used to synchronize events.
+ */
+public class PredicateLock implements BspEvent {
+    /** Lock */
+    private Lock lock = new ReentrantLock();
+    /** Condition associated with lock */
+    private Condition cond = lock.newCondition();
+    /** Predicate */
+    private boolean eventOccurred = false;
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(PredicateLock.class);
+
+    @Override
+    public void reset() {
+        lock.lock();
+        try {
+            eventOccurred = false;
+        } finally {
+            lock.unlock();
+        }
+    }
+
+    @Override
+    public void signal() {
+        lock.lock();
+        try {
+            eventOccurred = true;
+            cond.signalAll();
+        } finally {
+            lock.unlock();
+        }
+    }
+
+    @Override
+    public boolean waitMsecs(int msecs) {
+        if (msecs < -1) {
+            throw new RuntimeException("msecs < -1");
+        }
+
+        long maxMsecs = System.currentTimeMillis() + msecs;
+        long curMsecTimeout = 0;
+        lock.lock();
+        try {
+            while (!eventOccurred) {
+                if (msecs == -1) {
+                    try {
+                        cond.await();
+                    } catch (InterruptedException e) {
+                        throw new IllegalStateException(
+                            "waitMsecs: Caught interrupted " +
+                            "exception on cond.await()", e);
+                    }
+                }
+                else {
+                    // Keep the wait non-negative
+                    curMsecTimeout =
+                        Math.max(maxMsecs - System.currentTimeMillis(), 0);
+                    if (LOG.isDebugEnabled()) {
+                        LOG.debug("waitMsecs: Wait for " + curMsecTimeout);
+                    }
+                    try {
+                        boolean signaled =
+                            cond.await(curMsecTimeout, TimeUnit.MILLISECONDS);
+                        if (LOG.isDebugEnabled()) {
+                            LOG.debug("waitMsecs: Got timed signaled of " +
+                                      signaled);
+                        }
+                    } catch (InterruptedException e) {
+                        throw new IllegalStateException(
+                            "waitMsecs: Caught interrupted " +
+                            "exception on cond.await() " +
+                            curMsecTimeout, e);
+                    }
+                    if (System.currentTimeMillis() > maxMsecs) {
+                        return false;
+                    }
+                }
+            }
+        } finally {
+            lock.unlock();
+        }
+        return true;
+    }
+
+    @Override
+    public void waitForever() {
+        waitMsecs(-1);
+    }
+}
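
A sketch of the timeout semantics (assuming a freshly constructed lock that
no other thread signals): 0 checks the predicate immediately, a positive
value waits up to that many msecs, and -1 blocks until signal().

    PredicateLock lock = new PredicateLock();
    lock.waitMsecs(0);    // false: not signaled, returns immediately
    lock.waitMsecs(100);  // false: times out after ~100 msecs
    lock.signal();
    lock.waitMsecs(0);    // true: the predicate is already set
    lock.waitMsecs(-1);   // true: returns without blocking
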
diff --git a/src/main/java/org/apache/giraph/zk/ZooKeeperExt.java b/src/main/java/org/apache/giraph/zk/ZooKeeperExt.java
new file mode 100644
index 0000000..587108e
--- /dev/null
+++ b/src/main/java/org/apache/giraph/zk/ZooKeeperExt.java
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.zk;
+
+import java.io.IOException;
+
+import org.apache.log4j.Logger;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.data.ACL;
+import org.apache.zookeeper.data.Stat;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+/**
+ * ZooKeeper provides only atomic operations.  ZooKeeperExt provides additional
+ * non-atomic operations that are useful.
+ */
+public class ZooKeeperExt extends ZooKeeper {
+    /** Internal logger */
+    private static final Logger LOG = Logger.getLogger(ZooKeeperExt.class);
+    /** Length of the ZK sequence number */
+    private static final int SEQUENCE_NUMBER_LENGTH = 10;
+
+    /**
+     * Constructor to connect to ZooKeeper
+     *
+     * @param connectString Comma separated host:port pairs, each corresponding
+     *        to a zk server. e.g.
+     *        "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002" If the optional
+     *        chroot suffix is used the example would look
+     *        like: "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002/app/a"
+     *        where the client would be rooted at "/app/a" and all paths
+     *        would be relative to this root - ie getting/setting/etc...
+     *        "/foo/bar" would result in operations being run on
+     *        "/app/a/foo/bar" (from the server perspective).
+     * @param sessionTimeout Session timeout in milliseconds
+     * @param watcher A watcher object which will be notified of state changes,
+     *        may also be notified for node events
+     * @throws IOException
+     */
+    public ZooKeeperExt(String connectString,
+                        int sessionTimeout,
+                        Watcher watcher) throws IOException {
+        super(connectString, sessionTimeout, watcher);
+    }
+
+    /**
+     * Provides the possibility of creating a path consisting of more than
+     * one znode (not atomic).  If recursive is false, operates exactly the
+     * same as create().
+     *
+     * @param path path to create
+     * @param data data to set on the final znode
+     * @param acl acls on each znode created
+     * @param createMode only affects the final znode
+     * @param recursive if true, creates all ancestors
+     * @return Actual created path
+     * @throws KeeperException
+     * @throws InterruptedException
+     */
+    public String createExt(
+            final String path,
+            byte data[],
+            List<ACL> acl,
+            CreateMode createMode,
+            boolean recursive) throws KeeperException, InterruptedException {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("createExt: Creating path " + path);
+        }
+
+        if (!recursive) {
+            return create(path, data, acl, createMode);
+        }
+
+        try {
+            return create(path, data, acl, createMode);
+        } catch (KeeperException.NoNodeException e) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("createExt: Cannot directly create node " + path);
+            }
+        }
+
+        int pos = path.indexOf("/", 1);
+        for (; pos != -1; pos = path.indexOf("/", pos + 1)) {
+            try {
+                create(
+                    path.substring(0, pos), null, acl, CreateMode.PERSISTENT);
+            } catch (KeeperException.NodeExistsException e) {
+                if (LOG.isDebugEnabled()) {
+                    LOG.debug("createExt: Znode " + path.substring(0, pos) +
+                              " already exists");
+                }
+            }
+        }
+        return create(path, data, acl, createMode);
+    }
+
+    /**
+     * Data structure for handling the output of createOrSetExt()
+     */
+    public class PathStat {
+        private String path;
+        private Stat stat;
+
+        /**
+         * Put in results from createOrSetExt()
+         *
+         * @param path Path to created znode (or null)
+         * @param stat Stat from set znode (if set)
+         */
+        public PathStat(String path, Stat stat) {
+            this.path = path;
+            this.stat = stat;
+        }
+
+        /**
+         * Get the path of the created znode if it was created.
+         *
+         * @return Path of created znode or null if not created
+         */
+        public String getPath() {
+            return path;
+        }
+
+        /**
+         * Get the stat of the set znode if set
+         *
+         * @return Stat of set znode or null if not set
+         */
+        public Stat getStat() {
+            return stat;
+        }
+    }
+
+    /**
+     * Create a znode; if it already exists, set its data instead.
+     *
+     * @param path path to create
+     * @param data data to set on the final znode
+     * @param acl acls on each znode created
+     * @param createMode only affects the final znode
+     * @param recursive if true, creates all ancestors
+     * @param version expected version for the set if the znode already
+     *        exists (-1 matches any version)
+     * @return Path of created znode or Stat of set znode
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    public PathStat createOrSetExt(final String path,
+                                   byte data[],
+                                   List<ACL> acl,
+                                   CreateMode createMode,
+                                   boolean recursive,
+                                   int version)
+            throws KeeperException, InterruptedException {
+        String createdPath = null;
+        Stat setStat = null;
+        try {
+            createdPath = createExt(path, data, acl, createMode, recursive);
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("createOrSet: Node exists on path " + path);
+            }
+            setStat = setData(path, data, version);
+        }
+        return new PathStat(createdPath, setStat);
+    }
+
+    /**
+     * Create a znode if there is no other znode there
+     *
+     * @param path path to create
+     * @param data data to set on the final znode
+     * @param acl acls on each znode created
+     * @param createMode only affects the final znode
+     * @param recursive if true, creates all ancestors
+     * @return PathStat whose path is the created znode, or whose fields are
+     *         both null if the znode already existed
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    public PathStat createOnceExt(final String path,
+                                   byte data[],
+                                   List<ACL> acl,
+                                   CreateMode createMode,
+                                   boolean recursive)
+            throws KeeperException, InterruptedException {
+        String createdPath = null;
+        Stat setStat = null;
+        try {
+            createdPath = createExt(path, data, acl, createMode, recursive);
+        } catch (KeeperException.NodeExistsException e) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("createOnceExt: Node already exists on path " + path);
+            }
+        }
+        return new PathStat(createdPath, setStat);
+    }
+
+    /**
+     * Delete a path recursively.  When the deletion is recursive, it is a
+     * non-atomic operation, hence, not part of ZooKeeper.
+     * @param path path to remove (e.g. removing /tmp also removes /tmp/1
+     *        and /tmp/2)
+     * @param version expected version (-1 for all)
+     * @param recursive if true, remove all children, otherwise behave like
+     *        remove()
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    public void deleteExt(final String path, int version, boolean recursive)
+            throws InterruptedException, KeeperException {
+        if (!recursive) {
+            delete(path, version);
+            return;
+        }
+
+        try {
+            delete(path, version);
+            return;
+        } catch (KeeperException.NotEmptyException e) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("deleteExt: Cannot directly remove node " + path);
+            }
+        }
+
+        List<String> childList = getChildren(path, false);
+        for (String child : childList) {
+            deleteExt(path + "/" + child, -1, true);
+        }
+
+        delete(path, version);
+    }
+
+    /**
+     * Get the children of the path with extensions.
+     * Extension 1: Sort the children based on sequence number
+     * Extension 2: Get the full path instead of relative path
+     *
+     * @param path path to znode
+     * @param watch set the watch?
+     * @param sequenceSorted sort by the sequence number
+     * @param fullPath if true, get the full znode path back
+     * @return list of children
+     * @throws InterruptedException
+     * @throws KeeperException
+     */
+    public List<String> getChildrenExt(
+            final String path,
+            boolean watch,
+            boolean sequenceSorted,
+            boolean fullPath)
+            throws KeeperException, InterruptedException {
+        List<String> childList = getChildren(path, watch);
+        /* Sort children according to the sequence number, if desired */
+        if (sequenceSorted) {
+            Collections.sort(childList,
+                new Comparator<String>() {
+                    public int compare(String s1, String s2) {
+                        if ((s1.length() <= SEQUENCE_NUMBER_LENGTH) ||
+                            (s2.length() <= SEQUENCE_NUMBER_LENGTH)) {
+                            throw new RuntimeException(
+                                "getChildrenExt: Can't sort children by " +
+                                "sequence number when a name has length <= " +
+                                SEQUENCE_NUMBER_LENGTH + ": s1 (" +
+                                s1.length() + ") or s2 (" + s2.length() + ")");
+                        }
+                        int s1sequenceNumber = Integer.parseInt(
+                                s1.substring(s1.length() -
+                                             SEQUENCE_NUMBER_LENGTH));
+                        int s2sequenceNumber = Integer.parseInt(
+                                s2.substring(s2.length() -
+                                             SEQUENCE_NUMBER_LENGTH));
+                        return s1sequenceNumber - s2sequenceNumber;
+                    }
+                }
+            );
+        }
+        if (fullPath) {
+            List<String> fullChildList = new ArrayList<String>();
+            for (String child : childList) {
+                fullChildList.add(path + "/" + child);
+            }
+            return fullChildList;
+        }
+        return childList;
+    }
+}
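
A connection-and-usage sketch for the extensions above (checked exceptions
elided; the connect string, session timeout, and paths are placeholders):

    Watcher noopWatcher = new Watcher() {
        public void process(WatchedEvent event) { }
    };
    ZooKeeperExt zk =
        new ZooKeeperExt("localhost:2181", 30000, noopWatcher);

    // Recursive create: makes /giraph and /giraph/app as needed
    zk.createExt("/giraph/app/state", null, Ids.OPEN_ACL_UNSAFE,
                 CreateMode.PERSISTENT, true);

    // Create-or-set: sets the data if the znode already exists
    zk.createOrSetExt("/giraph/app/state", "v2".getBytes(),
                      Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT,
                      true, -1);  // -1 matches any version

    // Sequence-sorted, full-path listing only works for children whose
    // names end in a 10-digit sequence number
    zk.createExt("/giraph/queue/task-", null, Ids.OPEN_ACL_UNSAFE,
                 CreateMode.PERSISTENT_SEQUENTIAL, true);
    List<String> ordered =
        zk.getChildrenExt("/giraph/queue", false, true, true);

    zk.deleteExt("/giraph", -1, true);  // recursive teardown
    zk.close();
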
diff --git a/src/main/java/org/apache/giraph/zk/ZooKeeperManager.java b/src/main/java/org/apache/giraph/zk/ZooKeeperManager.java
new file mode 100644
index 0000000..5dc0d45
--- /dev/null
+++ b/src/main/java/org/apache/giraph/zk/ZooKeeperManager.java
@@ -0,0 +1,825 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.zk;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Writer;
+import java.net.ConnectException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Closeables;
+import org.apache.commons.io.FileUtils;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.log4j.Logger;
+import org.apache.zookeeper.server.quorum.QuorumPeerMain;
+
+/**
+ * Manages the election of ZooKeeper servers, starting/stopping the services,
+ * etc.
+ */
+public class ZooKeeperManager {
+    /** Job context (mainly for progress) */
+    private Mapper<?, ?, ?, ?>.Context context;
+    /** Hadoop configuration */
+    private final Configuration conf;
+    /** Class logger */
+    private static final Logger LOG = Logger.getLogger(ZooKeeperManager.class);
+    /** Task partition, to ensure uniqueness */
+    private final int taskPartition;
+    /** HDFS base directory for all file-based coordination */
+    private final Path baseDirectory;
+    /**
+     * HDFS task ZooKeeper candidate/completed
+     * directory for all file-based coordination
+     */
+    private final Path taskDirectory;
+    /**
+     * HDFS ZooKeeper server ready/done directory
+     * for all file-based coordination
+     */
+    private final Path serverDirectory;
+    /** HDFS path to whether the task is done */
+    private final Path myClosedPath;
+    /** Polling msecs timeout */
+    private final int pollMsecs;
+    /** Server count */
+    private final int serverCount;
+    /** File system */
+    private final FileSystem fs;
+    /** ZooKeeper process */
+    private Process zkProcess = null;
+    /** Thread that gets the zkProcess output */
+    private StreamCollector zkProcessCollector = null;
+    /** ZooKeeper local file system directory */
+    private String zkDir = null;
+    /** ZooKeeper config file path */
+    private String configFilePath = null;
+    /** ZooKeeper server list */
+    private final Map<String, Integer> zkServerPortMap = Maps.newTreeMap();
+    /** ZooKeeper base port */
+    private int zkBasePort = -1;
+    /** Final ZooKeeper server port list (for clients) */
+    private String zkServerPortString;
+    /** My hostname */
+    private String myHostname = null;
+    /** Job id, to ensure uniqueness */
+    private final String jobId;
+    /**
+     * Default local ZooKeeper prefix directory to use (where ZooKeeper server
+     * files will go)
+     */
+    private final String zkDirDefault;
+
+
+    /** Separates the hostname and task in the candidate stamp */
+    private static final String HOSTNAME_TASK_SEPARATOR = " ";
+    /** The ZooKeeper server list filename prefix */
+    private static final String ZOOKEEPER_SERVER_LIST_FILE_PREFIX =
+        "zkServerList_";
+    /** Denotes that the computation is done for a partition */
+    private static final String COMPUTATION_DONE_SUFFIX = ".COMPUTATION_DONE";
+    /** State of the application */
+    public enum State {
+        FAILED,
+        FINISHED
+    }
+
+    /**
+     * Generate the final ZooKeeper coordination directory on HDFS
+     *
+     * @return directory path with job id
+     */
+    private String getFinalZooKeeperPath() {
+        return GiraphJob.ZOOKEEPER_MANAGER_DIR_DEFAULT + "/" + jobId;
+    }
+
+    /**
+     * Collects the output of a stream and dumps it to the log.
+     */
+    private static class StreamCollector extends Thread {
+        /** Input stream to dump */
+        private final InputStream is;
+        /** Class logger */
+        private static final Logger LOG =
+            Logger.getLogger(StreamCollector.class);
+
+        /**
+         * Constructor.
+         *
+         * @param is InputStream to dump to LOG.info
+         */
+        public StreamCollector(final InputStream is) {
+            super(StreamCollector.class.getName());
+            this.is = is;
+        }
+
+        @Override
+        public void run() {
+            InputStreamReader streamReader = new InputStreamReader(is);
+            BufferedReader bufferedReader = new BufferedReader(streamReader);
+            String line;
+            try {
+                while ((line = bufferedReader.readLine()) != null) {
+                    if (LOG.isDebugEnabled()) {
+                        LOG.debug("run: " + line);
+                    }
+                }
+            } catch (IOException e) {
+                LOG.error("run: Ignoring IOException", e);
+            }
+        }
+    }
+
+    public ZooKeeperManager(Mapper<?, ?, ?, ?>.Context context)
+            throws IOException {
+        this.context = context;
+        conf = context.getConfiguration();
+        taskPartition = conf.getInt("mapred.task.partition", -1);
+        jobId = conf.get("mapred.job.id", "Unknown Job");
+        baseDirectory =
+            new Path(conf.get(GiraphJob.ZOOKEEPER_MANAGER_DIRECTORY,
+                              getFinalZooKeeperPath()));
+        taskDirectory = new Path(baseDirectory, "_task");
+        serverDirectory = new Path(baseDirectory, "_zkServer");
+        myClosedPath = new Path(taskDirectory,
+                                Integer.toString(taskPartition) +
+                                COMPUTATION_DONE_SUFFIX);
+        pollMsecs = conf.getInt(
+            GiraphJob.ZOOKEEPER_SERVERLIST_POLL_MSECS,
+            GiraphJob.ZOOKEEPER_SERVERLIST_POLL_MSECS_DEFAULT);
+        serverCount = conf.getInt(
+            GiraphJob.ZOOKEEPER_SERVER_COUNT,
+            GiraphJob.ZOOKEEPER_SERVER_COUNT_DEFAULT);
+        String jobLocalDir = conf.get("job.local.dir");
+        if (jobLocalDir != null) { // for non-local jobs
+            zkDirDefault = jobLocalDir +
+                "/_bspZooKeeper";
+        } else {
+            zkDirDefault = System.getProperty("user.dir") + "/_bspZooKeeper";
+        }
+        zkDir = conf.get(GiraphJob.ZOOKEEPER_DIR, zkDirDefault);
+        configFilePath = zkDir + "/zoo.cfg";
+        zkBasePort = conf.getInt(
+            GiraphJob.ZOOKEEPER_SERVER_PORT,
+            GiraphJob.ZOOKEEPER_SERVER_PORT_DEFAULT);
+
+
+        myHostname = InetAddress.getLocalHost().getCanonicalHostName();
+        fs = FileSystem.get(conf);
+    }
+
+    /**
+     * Create the candidate stamps and decide on the servers to start if
+     * you are partition 0.
+     *
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    public void setup() throws IOException, InterruptedException {
+        createCandidateStamp();
+        getZooKeeperServerList();
+    }
+
+    /**
+     * Create an HDFS stamp for this task.  If another task already
+     * created it, then this one will fail, which is fine.
+     */
+    public void createCandidateStamp() {
+        try {
+            fs.mkdirs(baseDirectory);
+            LOG.info("createCandidateStamp: Made the directory " +
+                      baseDirectory);
+        } catch (IOException e) {
+            LOG.error("createCandidateStamp: Failed to mkdirs " +
+                      baseDirectory);
+        }
+        // Check that the base directory exists and is a directory
+        try {
+            if (!fs.getFileStatus(baseDirectory).isDir()) {
+                throw new IllegalArgumentException(
+                    "createCandidateStamp: " + baseDirectory +
+                    " is not a directory, but should be.");
+            }
+        } catch (IOException e) {
+            throw new IllegalArgumentException(
+                "createCandidateStamp: Couldn't get file status " +
+                "for base directory " + baseDirectory + ".  If there is an " +
+                "issue with this directory, please set an accesible " +
+                "base directory with the Hadoop configuration option " +
+                GiraphJob.ZOOKEEPER_MANAGER_DIRECTORY);
+        }
+
+        Path myCandidacyPath = new Path(
+            taskDirectory, myHostname +
+            HOSTNAME_TASK_SEPARATOR + taskPartition);
+        try {
+            if (LOG.isInfoEnabled()) {
+                LOG.info("createCandidateStamp: Creating my filestamp " +
+                         myCandidacyPath);
+            }
+            fs.createNewFile(myCandidacyPath);
+        } catch (IOException e) {
+            LOG.error("createCandidateStamp: Failed (maybe previous task " +
+                      "failed) to create filestamp " + myCandidacyPath, e);
+        }
+    }
+
+    /**
+     * Every task must create a stamp to let the ZooKeeper servers know that
+     * they can shutdown.  This also lets the task know that it was already
+     * completed.
+     */
+    private void createZooKeeperClosedStamp() {
+        try {
+            LOG.info("createZooKeeperClosedStamp: Creating my filestamp " +
+                     myClosedPath);
+            fs.createNewFile(myClosedPath);
+        } catch (IOException e) {
+            LOG.error("createZooKeeperClosedStamp: Failed (maybe previous task " +
+                      "failed) to create filestamp " + myClosedPath);
+        }
+    }
+
+    /**
+     * Check if this task has already completed its computation (i.e. its
+     * closed stamp exists, as happens when a task is restarted).
+     * @return true if this task's computation is done.
+     */
+    public boolean computationDone() {
+        try {
+            return fs.exists(myClosedPath);
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    /**
+     * Task 0 will call this to create the ZooKeeper server list.  The result is
+     * a file that describes the ZooKeeper servers through the filename.
+     *
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    private void createZooKeeperServerList()
+            throws IOException, InterruptedException {
+        int candidateRetrievalAttempt = 0;
+        Map<String, Integer> hostnameTaskMap = Maps.newTreeMap();
+        while (true) {
+            FileStatus [] fileStatusArray = fs.listStatus(taskDirectory);
+            hostnameTaskMap.clear();
+            if (fileStatusArray != null && fileStatusArray.length > 0) {
+                for (FileStatus fileStatus : fileStatusArray) {
+                    String[] hostnameTaskArray =
+                        fileStatus.getPath().getName().split(
+                            HOSTNAME_TASK_SEPARATOR);
+                    if (hostnameTaskArray.length != 2) {
+                        throw new RuntimeException(
+                            "getZooKeeperServerList: Task 0 failed " +
+                            "to parse " +
+                            fileStatus.getPath().getName());
+                    }
+                    if (!hostnameTaskMap.containsKey(hostnameTaskArray[0])) {
+                        hostnameTaskMap.put(hostnameTaskArray[0],
+                            Integer.parseInt(hostnameTaskArray[1]));
+                    }
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("getZooKeeperServerList: Got " +
+                             hostnameTaskMap.keySet() + " " +
+                             hostnameTaskMap.size() + " hosts from " +
+                             fileStatusArray.length + " candidates when " +
+                             serverCount + " required (polling period is " +
+                             pollMsecs + ") on attempt " +
+                             candidateRetrievalAttempt);
+                }
+
+                if (hostnameTaskMap.size() >= serverCount) {
+                    break;
+                }
+            }
+            ++candidateRetrievalAttempt;
+            Thread.sleep(pollMsecs);
+        }
+        StringBuilder serverListFile =
+            new StringBuilder(ZOOKEEPER_SERVER_LIST_FILE_PREFIX);
+        int numServers = 0;
+        for (Map.Entry<String, Integer> hostnameTask :
+                hostnameTaskMap.entrySet()) {
+            serverListFile.append(hostnameTask.getKey())
+                .append(HOSTNAME_TASK_SEPARATOR)
+                .append(hostnameTask.getValue())
+                .append(HOSTNAME_TASK_SEPARATOR);
+            if (++numServers == serverCount) {
+                break;
+            }
+        }
+        Path serverListPath =
+            new Path(baseDirectory, serverListFile.toString());
+        if (LOG.isInfoEnabled()) {
+            LOG.info("createZooKeeperServerList: Creating the final " +
+                     "ZooKeeper file '" + serverListPath + "'");
+        }
+        fs.createNewFile(serverListPath);
+    }
+
+    /**
+     * Make an attempt to get the server list file by looking for a file in
+     * the appropriate directory with the prefix
+     * ZOOKEEPER_SERVER_LIST_FILE_PREFIX.
+     * @return null if not found or the filename if found
+     * @throws IOException
+     */
+    private String getServerListFile() throws IOException {
+        String serverListFile = null;
+        FileStatus [] fileStatusArray = fs.listStatus(baseDirectory);
+        for (FileStatus fileStatus : fileStatusArray) {
+            if (fileStatus.getPath().getName().startsWith(
+                    ZOOKEEPER_SERVER_LIST_FILE_PREFIX)) {
+                serverListFile = fileStatus.getPath().getName();
+                break;
+            }
+        }
+        return serverListFile;
+    }
+
+    /**
+     * Task 0 is the designated master and will generate the server list
+     * (unless it has already done so).  Other
+     * tasks will consume the file after it is created (just the filename).
+     * @throws IOException
+     * @throws InterruptedException
+     */
+    private void getZooKeeperServerList()
+            throws IOException, InterruptedException {
+        String serverListFile;
+
+        if (taskPartition == 0) {
+            serverListFile = getServerListFile();
+            if (serverListFile == null) {
+                createZooKeeperServerList();
+            }
+        }
+
+        while (true) {
+            serverListFile = getServerListFile();
+            if (LOG.isInfoEnabled()) {
+                LOG.info("getZooKeeperServerList: For task " + taskPartition +
+                         ", got file '" + serverListFile +
+                         "' (polling period is " +
+                         pollMsecs + ")");
+            }
+            if (serverListFile != null) {
+                break;
+            }
+            try {
+                Thread.sleep(pollMsecs);
+            } catch (InterruptedException e) {
+                LOG.warn("getZooKeeperServerList: Strange interrupted " +
+                         "exception " + e.getMessage());
+            }
+
+        }
+
+        List<String> serverHostList = Arrays.asList(serverListFile.substring(
+            ZOOKEEPER_SERVER_LIST_FILE_PREFIX.length()).split(
+                HOSTNAME_TASK_SEPARATOR));
+        if (LOG.isInfoEnabled()) {
+            LOG.info("getZooKeeperServerList: Found " + serverHostList + " " +
+                     serverHostList.size() +
+                     " hosts in filename '" + serverListFile + "'");
+        }
+        if (serverHostList.size() != serverCount * 2) {
+            throw new IllegalStateException(
+                "getZooKeeperServerList: Impossible that " +
+                serverHostList.size() + " entries were found when 2 * " +
+                serverCount + " were asked for.");
+        }
+
+        for (int i = 0; i < serverHostList.size(); i += 2) {
+            zkServerPortMap.put(serverHostList.get(i),
+                                Integer.parseInt(serverHostList.get(i + 1)));
+        }
+        zkServerPortString = "";
+        for (String server : zkServerPortMap.keySet()) {
+            if (zkServerPortString.length() > 0) {
+                zkServerPortString += ",";
+            }
+            zkServerPortString += server + ":" + zkBasePort;
+        }
+    }
+
+    /**
+     * Users can get the server port string to connect to ZooKeeper
+     * @return server port string - comma separated
+     */
+    public String getZooKeeperServerPortString() {
+        return zkServerPortString;
+    }
+
+    /**
+     * Whoever is elected to be a ZooKeeper server must generate a config
+     * file locally.
+     *
+     * @param serverList hostnames of the elected ZooKeeper servers
+     */
+    private void generateZooKeeperConfigFile(List<String> serverList) {
+        if (LOG.isInfoEnabled()) {
+            LOG.info("generateZooKeeperConfigFile: Creating file " +
+                     configFilePath + " in " + zkDir + " with base port " +
+                     zkBasePort);
+        }
+        try {
+            File zkDirFile = new File(this.zkDir);
+            boolean mkDirRet = zkDirFile.mkdirs();
+            if (LOG.isInfoEnabled()) {
+                LOG.info("generateZooKeeperConfigFile: Make directory of " +
+                         zkDirFile.getName() + " = " + mkDirRet);
+            }
+            File configFile = new File(configFilePath);
+            boolean deletedRet = configFile.delete();
+            if (LOG.isInfoEnabled()) {
+                LOG.info("generateZooKeeperConfigFile: Delete of " +
+                         configFile.getName() + " = " + deletedRet);
+            }
+            if (!configFile.createNewFile()) {
+                throw new IllegalStateException(
+                    "generateZooKeeperConfigFile: Failed to " +
+                    "create config file " + configFile.getName());
+            }
+            // Make writable by everybody
+            if (!configFile.setWritable(true, false)) {
+                throw new IllegalStateException(
+                    "generateZooKeeperConfigFile: Failed to make writable " +
+                    configFile.getName());
+            }
+            
+            Writer writer = null;
+            try {
+                writer = new FileWriter(configFilePath);
+                writer.write("tickTime=" +
+                             GiraphJob.DEFAULT_ZOOKEEPER_TICK_TIME + "\n");
+                writer.write("dataDir=" + this.zkDir + "\n");
+                writer.write("clientPort=" + zkBasePort + "\n");
+                writer.write("maxClientCnxns=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_MAX_CLIENT_CNXNS +
+                        "\n");
+                writer.write("minSessionTimeout=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_MIN_SESSION_TIMEOUT +
+                        "\n");
+                writer.write("maxSessionTimeout=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_MAX_SESSION_TIMEOUT +
+                        "\n");
+                writer.write("initLimit=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_INIT_LIMIT + "\n");
+                writer.write("syncLimit=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_SYNC_LIMIT + "\n");
+                writer.write("snapCount=" +
+                        GiraphJob.DEFAULT_ZOOKEEPER_SNAP_COUNT + "\n");
+                if (serverList.size() != 1) {
+                    writer.write("electionAlg=0\n");
+                    for (int i = 0; i < serverList.size(); ++i) {
+                        writer.write("server." + i + "=" + serverList.get(i) +
+                                     ":" + (zkBasePort + 1) +
+                                     ":" + (zkBasePort + 2) + "\n");
+                        if (myHostname.equals(serverList.get(i))) {
+                            Writer myidWriter = null;
+                            try {
+                                myidWriter = new FileWriter(zkDir + "/myid");
+                                myidWriter.write(i + "\n");
+                            } finally {
+                                Closeables.closeQuietly(myidWriter);
+                            }
+                        }
+                    }
+                }
+            } finally {
+                Closeables.closeQuietly(writer);
+            }
+        } catch (IOException e) {
+            throw new IllegalStateException(
+                "generateZooKeeperConfigFile: Failed to write file", e);
+        }
+    }
+
+    /**
+     * If this task has been selected, online a ZooKeeper server.  Otherwise,
+     * wait until this task knows that the ZooKeeper servers have been onlined.
+     */
+    public void onlineZooKeeperServers() {
+        Integer taskId = zkServerPortMap.get(myHostname);
+        if ((taskId != null) && (taskId.intValue() == taskPartition)) {
+            File zkDirFile = new File(this.zkDir);
+            try {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("onlineZooKeeperServers: Trying to delete old " +
+                             "directory " + this.zkDir);
+                }
+                FileUtils.deleteDirectory(zkDirFile);
+            } catch (IOException e) {
+                LOG.warn("onlineZooKeeperServers: Failed to delete " +
+                         "directory " + this.zkDir, e);
+            }
+            generateZooKeeperConfigFile(
+                new ArrayList<String>(zkServerPortMap.keySet()));
+            ProcessBuilder processBuilder = new ProcessBuilder();
+            List<String> commandList = Lists.newArrayList();
+            String javaHome = System.getProperty("java.home");
+            if (javaHome == null) {
+                throw new IllegalArgumentException(
+                    "onlineZooKeeperServers: java.home is not set!");
+            }
+            commandList.add(javaHome + "/bin/java");
+            String zkJavaOptsString =
+                conf.get(GiraphJob.ZOOKEEPER_JAVA_OPTS,
+                         GiraphJob.ZOOKEEPER_JAVA_OPTS_DEFAULT);
+            String[] zkJavaOptsArray = zkJavaOptsString.split(" ");
+            for (String javaOpt : zkJavaOptsArray) {
+                commandList.add(javaOpt);
+            }
+            commandList.add("-cp");
+            Path fullJarPath = new Path(conf.get(GiraphJob.ZOOKEEPER_JAR));
+            commandList.add(fullJarPath.toString());
+            commandList.add(QuorumPeerMain.class.getName());
+            commandList.add(configFilePath);
+            processBuilder.command(commandList);
+            File execDirectory = new File(zkDir);
+            processBuilder.directory(execDirectory);
+            processBuilder.redirectErrorStream(true);
+            if (LOG.isInfoEnabled()) {
+                LOG.info("onlineZooKeeperServers: Attempting to " +
+                         "start ZooKeeper server with command " + commandList +
+                         " in directory " + execDirectory.toString());
+            }
+            try {
+                synchronized (this) {
+                    zkProcess = processBuilder.start();
+                    zkProcessCollector =
+                        new StreamCollector(zkProcess.getInputStream());
+                    zkProcessCollector.start();
+                }
+                Runnable runnable = new Runnable() {
+                    public void run() {
+                        // Synchronize on the manager, which guards zkProcess
+                        synchronized (ZooKeeperManager.this) {
+                            if (zkProcess != null) {
+                                LOG.warn("onlineZooKeeperServers: " +
+                                    "Forced a shutdown hook kill of the " +
+                                    "ZooKeeper process.");
+                                zkProcess.destroy();
+                            }
+                        }
+                    }
+                };
+                Runtime.getRuntime().addShutdownHook(new Thread(runnable));
+            } catch (IOException e) {
+                LOG.error("onlineZooKeeperServers: Failed to start " +
+                          "ZooKeeper process", e);
+                throw new RuntimeException(e);
+            }
+
+            // Once the server is up and running, notify that this server is up
+            // and running by dropping a ready stamp.
+            int connectAttempts = 0;
+            final int maxConnectAttempts = 10;
+            while (connectAttempts < maxConnectAttempts) {
+                try {
+                    if (LOG.isInfoEnabled()) {
+                        LOG.info("onlineZooKeeperServers: Connect attempt " +
+                                 connectAttempts + " of " +
+                                 maxConnectAttempts +
+                                 " max trying to connect to " +
+                                 myHostname + ":" + zkBasePort +
+                                 " with poll msecs = " + pollMsecs);
+                    }
+                    InetSocketAddress zkServerAddress =
+                        new InetSocketAddress(myHostname, zkBasePort);
+                    Socket testServerSock = new Socket();
+                    try {
+                        testServerSock.connect(zkServerAddress, 5000);
+                        if (LOG.isInfoEnabled()) {
+                            LOG.info("onlineZooKeeperServers: Connected to " +
+                                     zkServerAddress + "!");
+                        }
+                    } finally {
+                        testServerSock.close();
+                    }
+                    break;
+                } catch (SocketTimeoutException e) {
+                    LOG.warn("onlineZooKeeperServers: Got " +
+                             "SocketTimeoutException", e);
+                } catch (ConnectException e) {
+                    LOG.warn("onlineZooKeeperServers: Got " +
+                             "ConnectException", e);
+                } catch (IOException e) {
+                    LOG.warn("onlineZooKeeperServers: Got " +
+                             "IOException", e);
+                }
+
+                ++connectAttempts;
+                try {
+                    Thread.sleep(pollMsecs);
+                } catch (InterruptedException e) {
+                    LOG.warn("onlineZooKeeperServers: Sleep of " + pollMsecs +
+                             " interrupted - " + e.getMessage());
+                }
+            }
+            if (connectAttempts == maxConnectAttempts) {
+                throw new IllegalStateException(
+                    "onlineZooKeeperServers: Failed to connect in " +
+                    connectAttempts + " tries!");
+            }
+            Path myReadyPath = new Path(
+                    serverDirectory, myHostname +
+                    HOSTNAME_TASK_SEPARATOR + taskPartition);
+            try {
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("onlineZooKeeperServers: Creating my filestamp " +
+                             myReadyPath);
+                }
+                fs.createNewFile(myReadyPath);
+            } catch (IOException e) {
+                LOG.error("onlineZooKeeperServers: Failed (maybe previous " +
+                          "task failed) to create filestamp " + myReadyPath, e);
+            }
+        }
+        else {
+            List<String> foundList = new ArrayList<String>();
+            int readyRetrievalAttempt = 0;
+            while (true) {
+                try {
+                    FileStatus [] fileStatusArray =
+                        fs.listStatus(serverDirectory);
+                    foundList.clear();
+                    if ((fileStatusArray != null) &&
+                        (fileStatusArray.length > 0)) {
+                        for (int i = 0; i < fileStatusArray.length; ++i) {
+                            String[] hostnameTaskArray =
+                                fileStatusArray[i].getPath().getName().split(
+                                    HOSTNAME_TASK_SEPARATOR);
+                            if (hostnameTaskArray.length != 2) {
+                                throw new RuntimeException(
+                                    "getZooKeeperServerList: Task 0 failed " +
+                                    "to parse " +
+                                    fileStatusArray[i].getPath().getName());
+                            }
+                            foundList.add(hostnameTaskArray[0]);
+                        }
+                        if (LOG.isInfoEnabled()) {
+                            LOG.info("onlineZooKeeperServers: Got " +
+                                     foundList + " " +
+                                     foundList.size() + " hosts from " +
+                                     fileStatusArray.length +
+                                     " ready servers when " +
+                                     serverCount +
+                                     " required (polling period is " +
+                                     pollMsecs + ") on attempt " +
+                                     readyRetrievalAttempt);
+                        }
+                        if (foundList.containsAll(zkServerPortMap.keySet())) {
+                            break;
+                        }
+                    } else {
+                        if (LOG.isInfoEnabled()) {
+                            LOG.info("onlineZooKeeperSErvers: Empty " +
+                                     "directory " + serverDirectory +
+                                     ", waiting " + pollMsecs + " msecs.");
+                        }
+                    }
+                    Thread.sleep(pollMsecs);
+                    ++readyRetrievalAttempt;
+                } catch (IOException e) {
+                    throw new RuntimeException(e);
+                } catch (InterruptedException e) {
+                    LOG.warn("onlineZooKeeperServers: Strange interrupt from " +
+                             e.getMessage(), e);
+                }
+            }
+        }
+    }
+
+    /**
+     * Wait for all map tasks to signal completion.
+     *
+     * @param totalMapTasks Number of map tasks to wait for
+     */
+    private void waitUntilAllTasksDone(int totalMapTasks) {
+        int attempt = 0;
+        while (true) {
+            try {
+                FileStatus [] fileStatusArray =
+                    fs.listStatus(taskDirectory);
+                int totalDone = 0;
+                if ((fileStatusArray != null) &&
+                    (fileStatusArray.length > 0)) {
+                    for (int i = 0; i < fileStatusArray.length; ++i) {
+                        if (fileStatusArray[i].getPath().getName().endsWith(
+                            COMPUTATION_DONE_SUFFIX)) {
+                            ++totalDone;
+                        }
+                    }
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("waitUntilAllTasksDone: Got " + totalDone +
+                             " tasks done out of " + totalMapTasks +
+                             " desired (polling period is " +
+                             pollMsecs + ") on attempt " +
+                             attempt);
+                }
+                if (totalDone >= totalMapTasks) {
+                    break;
+                }
+                ++attempt;
+                Thread.sleep(pollMsecs);
+                context.progress();
+            } catch (IOException e) {
+                LOG.warn("waitUntilAllTasksDone: Got IOException.", e);
+            } catch (InterruptedException e) {
+                LOG.warn("waitUntilAllTasksDone: Got InterruptedException", e);
+            }
+        }
+    }
+
+    /**
+     * Notify the ZooKeeper servers that this partition is done with all
+     * ZooKeeper communication.  If this task is running a ZooKeeper server,
+     * kill it when all partitions are done and wait for
+     * completion.  Clean up the ZooKeeper local directory as well.
+     *
+     * @param state State of the application
+     */
+    public void offlineZooKeeperServers(State state) {
+        if (state == State.FINISHED) {
+            createZooKeeperClosedStamp();
+        }
+        synchronized (this) {
+            if (zkProcess != null) {
+                int totalMapTasks = conf.getInt("mapred.map.tasks", -1);
+                waitUntilAllTasksDone(totalMapTasks);
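+                // All map tasks are done; safe to kill the local server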
+                zkProcess.destroy();
+                int exitValue = -1;
+                File zkDirFile;
+                try {
+                    zkProcessCollector.join();
+                    exitValue = zkProcess.waitFor();
+                    zkDirFile = new File(zkDir);
+                    FileUtils.deleteDirectory(zkDirFile);
+                } catch (InterruptedException e) {
+                    LOG.warn("offlineZooKeeperServers: " +
+                             "InterruptedException, but continuing ",
+                             e);
+                } catch (IOException e) {
+                LOG.warn("offlineZooKeeperServers: " +
+                             "IOException, but continuing",
+                             e);
+                }
+                if (LOG.isInfoEnabled()) {
+                    LOG.info("offlineZooKeeperServers: waitFor returned " +
+                             exitValue + " and deleted directory " + zkDir);
+                }
+                zkProcess = null;
+            }
+        }
+    }
+
+    /**
+     *  Is this task running a ZooKeeper server?  Can only be true if called
+     *  after onlineZooKeeperServers().
+     *
+     *  @return true if running a ZooKeeper server, false otherwise
+     */
+    public boolean runsZooKeeper() {
+        synchronized (this) {
+            return zkProcess != null;
+        }
+    }
+}
diff --git a/src/site/site.xml b/src/site/site.xml
new file mode 100644
index 0000000..2b6da27
--- /dev/null
+++ b/src/site/site.xml
@@ -0,0 +1,70 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project name="Giraph">
+  <bannerRight>
+    <src>http://incubator.apache.org/images/apache-incubator-logo.png</src>
+    <href>http://incubator.apache.org/</href>
+  </bannerRight>
+
+  <publishDate position="right"/>
+  <version position="right"/>
+
+  <body>
+    <links position="left">
+      <item name="Wiki" href="https://cwiki.apache.org/confluence/display/GIRAPH" />
+      <item name="JIRA" href="https://issues.apache.org/jira/browse/GIRAPH" />
+      <item name="SVN" href="https://svn.apache.org/repos/asf/incubator/giraph/" />
+    </links>
+
+    <breadcrumbs position="left">
+      <item name="Apache" href="http://www.apache.org/" />
+      <item name="Apache Incubator" href="http://incubator.apache.org/" />
+      <item name="Giraph" href="http://incubator.apache.org/giraph/"/>
+    </breadcrumbs>
+    
+    <menu name="Giraph">
+      <item name="About" href="http://incubator.apache.org/giraph/index.html"/>
+      <item name="Wiki" href="https://cwiki.apache.org/confluence/display/GIRAPH" />
+    </menu>
+
+    <menu name="Project Information" inherit="top">
+      <item name="Summary" href="http://incubator.apache.org/giraph/project-summary.html" />
+      <item name="Team" href="http://incubator.apache.org/giraph/team-list.html" />
+      <item name="Mailing Lists" href="http://incubator.apache.org/giraph/mail-lists.html" />
+      <item name="License" href="http://www.apache.org/licenses/" />
+      <item name="Issue Tracking" href="http://incubator.apache.org/giraph/issue-tracking.html" />
+      <item name="Source Repository" href="http://incubator.apache.org/giraph/source-repository.html" />
+      <item name="Dependencies" href="http://incubator.apache.org/giraph/dependencies.html" />
+      <item name="Reports" href="http://incubator.apache.org/giraph/project-reports.html" collapse="true">
+        <item name="Surefire Report" href="http://incubator.apache.org/giraph/surefire-report.html" />
+        <item name="Checkstyle Results" href="http://incubator.apache.org/giraph/checkstyle.html" />
+        <item name="Jdepend" href="http://incubator.apache.org/giraph/jdepend-report.html" />
+        <item name="Cobertura Test Coverage" href="http://incubator.apache.org/giraph/cobertura/index.html" />
+        <item name="Tag List" href="http://incubator.apache.org/giraph/taglist.html" />
+        <item name="Source Xref" href="http://incubator.apache.org/giraph/xref/index.html" />
+        <item name="Test Source Xref" href="http://incubator.apache.org/giraph/xref-test/index.html" />
+      </item>
+    </menu>
+    
+    <menu name="Documentation">
+      <item name="Quick Start Guide" href="https://cwiki.apache.org/confluence/display/GIRAPH/Quick+Start+Guide"/>
+      <item name="Shortest Paths Example" href="https://cwiki.apache.org/confluence/display/GIRAPH/Shortest+Paths+Example"/>
+      <item name="Javadoc" href="apidocs/index.html"/>
+    </menu> 
+  </body>
+</project>
diff --git a/src/site/xdoc/index.xml b/src/site/xdoc/index.xml
new file mode 100644
index 0000000..9585b83
--- /dev/null
+++ b/src/site/xdoc/index.xml
@@ -0,0 +1,100 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Welcome To Apache Incubator Giraph</title>
+  </properties>
+
+  <body>
+    <section name="Welcome To Apache Incubator Giraph">
+        <p>Web and online social graphs have been rapidly growing in size and scale during the past decade. In 2008, Google estimated that the number of web pages had reached over a trillion. Online social networking and email sites, including Yahoo!, Google, Microsoft, Facebook, LinkedIn, and Twitter, have hundreds of millions of users and are expected to grow much more in the future. Processing these graphs plays a big role in providing relevant and personalized information to users, such as results from a search engine or news from an online social networking site.</p>
+
+        <p>Graph processing platforms to run large-scale algorithms (such as page rank, shared connections, personalization-based popularity, etc.) have become quite popular. Some recent examples include Pregel and HaLoop. For general-purpose big data computation, the map-reduce computing model has been well adopted and the most deployed map-reduce infrastructure is Apache Hadoop. We have implemented a graph-processing framework that is launched as a typical Hadoop job to leverage existing Hadoop infrastructure, such as Amazon's EC2. Giraph builds upon the graph-oriented nature of Pregel but additionally adds fault-tolerance to the coordinator process with the use of ZooKeeper as its centralized coordination service.</p>
+
+        <p>Giraph applies the bulk-synchronous parallel model to graphs, where vertices can send messages to other vertices during a given superstep. Checkpoints are initiated by the Giraph infrastructure at user-defined intervals and are used for automatic application restarts when any worker in the application fails. Any worker in the application can act as the application coordinator and one will automatically take over if the current application coordinator fails.</p>
+
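+        <p>To make the model concrete, the following minimal vertex sketch
+        propagates the maximum value through a graph. It is loosely modeled on
+        the bundled examples in org.apache.giraph.examples; the class name and
+        exact method signatures are illustrative rather than an API
+        reference.</p>
+        <source>
+import java.util.Iterator;
+
+import org.apache.giraph.graph.Vertex;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+
+public class MaxValueVertex extends
+        Vertex&lt;LongWritable, DoubleWritable, FloatWritable, DoubleWritable&gt; {
+    @Override
+    public void compute(Iterator&lt;DoubleWritable&gt; msgIterator) {
+        // Fold all messages from the previous superstep into the value
+        double max = getVertexValue().get();
+        while (msgIterator.hasNext()) {
+            max = Math.max(max, msgIterator.next().get());
+        }
+        if (getSuperstep() == 0 || max &gt; getVertexValue().get()) {
+            setVertexValue(new DoubleWritable(max));
+            // Messages sent now are delivered in the next superstep
+            sendMsgToAllEdges(new DoubleWritable(max));
+        }
+        // A halted vertex is reactivated only by an incoming message
+        voteToHalt();
+    }
+}
+</source>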
+    </section>
+    <section name="Incubator disclaimer">
+	<p>Apache Giraph is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.</p>
+	</section>
+	<section name="Presentations">
+      <p>We're working hard to build a community of users and developers around Giraph. As part of that outreach, we're giving presentations and talks to help bring people up to speed.</p>
+      <ul>
+          <li>Avery Ching introduced Giraph at Hadoop Summit 2011. <a href="http://www.youtube.com/watch?v=l4nQjAG6fac">Watch the video</a>.</li>
+          <li>An updated slidedeck of the Hadoop Summit talk was presented at HortonWorks. <a href="http://www.slideshare.net/averyching/20111014hortonworks">Read the slides</a>.</li>
+      </ul>
+    </section>
+	<section name="Supported versions of Apache Hadoop">
+	<p>Hadoop versions for use with Giraph:
+		<ul>
+		<li>Secure Hadoop versions: Apache Hadoop 0.20.203 and 0.20.204; other secure versions may work as well.</li>
+		<li>Non-secure Hadoop versions: Apache Hadoop 0.20.1, 0.20.2, 0.20.3. While we provide support for non-secure Hadoop with the maven profile 'hadoop_non_secure', we have been primarily focusing on secure Hadoop releases at this time.</li>
+		<li>Other distributions that include Apache Hadoop and are reported to work: Cloudera CDH3u0, CDH3u1.</li>
+	</ul>
+	</p>
+	</section>
+	<section name="Getting involved">
+		<p>Giraph is a new project and we're looking to quickly build a community of users and contributors. All types of help are appreciated: contributing patches, writing documentation, posing and answering questions on the mailing list, even <a href="https://issues.apache.org/jira/browse/GIRAPH-4">graphic design</a>. Here's how to get involved with Giraph (or any Apache project):</p>
+		<ul>
+		<li>Subscribe to the <a href="mail-lists.html">mailing lists</a>, particularly the user and dev list, and follow their activity for a while to get a feel for the state of the project and what the community is working on.</li>
+		<li>Browse through <a href="issue-tracking.html">Giraph's JIRA</a>, our issue tracking system, to find issues you may be interested in working on. To help new contributors pitch in quickly, we maintain a <a href="http://bit.ly/newbie_apache_giraph_issues">set of JIRAs</a> that focus on getting new contributors started with the mechanics of generating a patch &#151; downloading the source, changing a couple of lines, creating a patch, verifying its correctness, uploading it to JIRA and working with the community &#151; rather than deep technical issues within Giraph itself. These are good issues with which to join the community. See <a href="#Generatingpatches">below</a> for detailed instructions on creating patches.</li>
+		<li>Try out the examples and play with Giraph on your cluster. Be sure to ask questions on the mailing list or open new JIRAs if you run into issues with your particular configuration.</li>
+	</ul>
+	</section>
+	<section name="Building and testing">
+		<p>You will need the following:</p>
+		<ul>
+		<li>Java 1.6</li>
+		<li>Maven 3 or higher. Giraph uses the <a href="http://sonatype.github.com/munge-maven-plugin/">munge plugin</a>, which requires Maven 3, to support multiple versions of Hadoop. Also, the web site plugin requires Maven 3.</li>
+	</ul>
+
+		<p>Use the maven commands with secure Hadoop to:
+		<ul>
+		<li>compile (i.e. <tt>mvn compile</tt>)</li>
+		<li>package (i.e. <tt>mvn package</tt>)</li>
+		<li>test (i.e. <tt>mvn test</tt>). For testing, one can submit the tests to a running Hadoop instance (i.e. <tt>mvn test -Dprop.mapred.job.tracker=localhost:50300</tt>)</li>
+	    </ul>
+		For the non-secure versions of Hadoop, run the maven commands with the
+		additional argument <tt>-Dhadoop=non_secure</tt> to enable the maven profile
+		 <tt>hadoop_non_secure</tt>.  An example compilation command is
+		<tt>mvn -Dhadoop=non_secure compile</tt>.</p>
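+		<p>Assuming a working Maven 3 installation, the commands above amount
+		to the following sequence (the job tracker address is just the example
+		value used elsewhere on this page):</p>
+		<source>
+mvn compile
+mvn package
+mvn test -Dprop.mapred.job.tracker=localhost:50300
+mvn -Dhadoop=non_secure compile
+</source>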
+	</section>
+  <section name="Notes">
+      <p>Counter limit: In Hadoop 0.20.203.0 onwards, there is a limit on the number of counters one can use, which is set to 120 by default. This limit restricts the number of iterations/supersteps possible in Giraph. It can be increased by setting the parameter <tt>mapreduce.job.counters.limit</tt> in the job tracker's configuration file, <tt>mapred-site.xml</tt>.</p>
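+      <p>As an illustration (the value 512 is an arbitrary example, not a
+      recommendation), the entry in <tt>mapred-site.xml</tt> could look
+      like:</p>
+      <source>
+&lt;property&gt;
+  &lt;name&gt;mapreduce.job.counters.limit&lt;/name&gt;
+  &lt;value&gt;512&lt;/value&gt;
+&lt;/property&gt;
+</source>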
+  </section>
+  <section name="Generating patches" id="Generatingpatches">
+	<p>Follow these steps to generate a patch that can be attached to a JIRA issue for review.</p>
+    <ul>
+	   <li>Check out the Giraph source, either from the <a href="source-repository.html">subversion repository</a> or from a <a href="http://git.apache.org/">git mirror</a>. Note that the git mirrors may lag slightly behind the subversion repos.</li>
+	   <li>Make the changes necessary for your particular issue. Try to avoid unnecessary changes, such as extra whitespace or formatting changes. Include a unit test, or be ready to justify in the JIRA why one isn't necessary.</li>
+	   <li>Verify the new and existing tests continue to pass via <tt>mvn test</tt>. Verify the change works as expected on a real cluster, if possible. If one's not available for testing, mention it on the JIRA so another contributor can verify.</li>
+	   <li>Verify that RAT is ok with the changes that you've made via <tt>mvn rat:check</tt>. Also check that the patch follows Giraph's style guidelines (found in the source root in <tt>CODE_CONVENTIONS</tt>).</li>
+	   <li>Generate a patch either by <tt>svn diff > GIRAPH-{ISSUE-NUMBER}.patch</tt> or <tt>git diff --no-prefix trunk > GIRAPH-{ISSUE_NUMBER}.patch</tt> (the <tt>--no-prefix</tt> option is necessary to make the patch compatible with Apache's subversion repository). For subsequent patches, if necessary, number each version to make it easier for reviewers to track their progress.</li>
+		<li>Attach the patch to the JIRA issue (click <em>More Actions</em> and then <em>Attach File</em> from the top menu) using the comment to briefly explain what changes it contains and what testing was done. Mark the JIRA as <em>Patch Available</em> to let reviewers know it's ripe for evaluation.</li>
+		<li>Optionally, you can open a <a href="https://reviews.apache.org/">reviewboard</a> request for the patch, although not all reviewers use this tool.</li>
+	</ul>
+	<p>A committer should review the patch shortly and either provide feedback for a new version, or commit it to the Giraph source.</p>
+	</section>
+  </body>
+</document>
diff --git a/src/test/java/org/apache/giraph/BspCase.java b/src/test/java/org/apache/giraph/BspCase.java
new file mode 100644
index 0000000..3c2ef19
--- /dev/null
+++ b/src/test/java/org/apache/giraph/BspCase.java
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+
+import org.apache.giraph.examples.GeneratedVertexReader;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.zk.ZooKeeperExt;
+
+import junit.framework.TestCase;
+
+/**
+ * Extended TestCase to simplify setting up BSP test cases.
+ */
+public class BspCase extends TestCase implements Watcher {
+    /** JobTracker system property */
+    private final String jobTracker =
+        System.getProperty("prop.mapred.job.tracker");
+    /** Jar location system property */
+    private final String jarLocation =
+        System.getProperty("prop.jarLocation", "");
+    /** Number of actual processes for the BSP application */
+    private int numWorkers = 1;
+    /** ZooKeeper list system property */
+    private final String zkList = System.getProperty("prop.zookeeper.list");
+
+    /**
+     * Adjust the configuration to the basic test case
+     *
+     * @param job GiraphJob to adjust for the test setup
+     */
+    public final void setupConfiguration(GiraphJob job) {
+        Configuration conf = job.getConfiguration();
+        conf.set("mapred.jar", getJarLocation());
+
+        // Allow this test to be run on a real Hadoop setup
+        if (getJobTracker() != null) {
+            System.out.println("setup: Sending job to job tracker " +
+                       getJobTracker() + " with jar path " + getJarLocation()
+                       + " for " + getName());
+            conf.set("mapred.job.tracker", getJobTracker());
+            job.setWorkerConfiguration(getNumWorkers(),
+                                       getNumWorkers(),
+                                       100.0f);
+        }
+        else {
+            System.out.println("setup: Using local job runner with " +
+                               "location " + getJarLocation() + " for "
+                               + getName());
+            job.setWorkerConfiguration(1, 1, 100.0f);
+            // Single node testing
+            conf.setBoolean(GiraphJob.SPLIT_MASTER_WORKER, false);
+        }
+        conf.setInt(GiraphJob.POLL_ATTEMPTS, 10);
+        conf.setInt(GiraphJob.POLL_MSECS, 3*1000);
+        conf.setInt(GiraphJob.ZOOKEEPER_SERVERLIST_POLL_MSECS, 500);
+        if (getZooKeeperList() != null) {
+            job.setZooKeeperConfiguration(getZooKeeperList());
+        }
+        // GeneratedInputSplit will generate 5 vertices
+        conf.setLong(GeneratedVertexReader.READER_VERTICES, 5);
+    }
+
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public BspCase(String testName) {
+        super(testName);
+    }
+
+    /**
+     * Get the number of workers used in the BSP application
+     *
+     * @return number of workers
+     */
+    public int getNumWorkers() {
+        return numWorkers;
+    }
+
+    /**
+     * Get the ZooKeeper list
+     *
+     * @return ZooKeeper list set via the prop.zookeeper.list property
+     */
+    public String getZooKeeperList() {
+        return zkList;
+    }
+
+    /**
+     * Get the jar location
+     *
+     * @return location of the jar file
+     */
+    String getJarLocation() {
+        return jarLocation;
+    }
+
+    /**
+     * Get the job tracker location
+     *
+     * @return job tracker location as a string
+     */
+    String getJobTracker() {
+        return jobTracker;
+    }
+
+    /**
+     * Get the single part file status and make sure there is only one part
+     *
+     * @param job Job to get the file system from
+     * @param partDirPath Directory where the single part file should exist
+     * @return Single part file status
+     * @throws IOException
+     */
+    public static FileStatus getSinglePartFileStatus(Job job,
+                                                     Path partDirPath)
+            throws IOException {
+        FileSystem fs = FileSystem.get(job.getConfiguration());
+        FileStatus[] statusArray = fs.listStatus(partDirPath);
+        FileStatus singlePartFileStatus = null;
+        int partFiles = 0;
+        for (FileStatus fileStatus : statusArray) {
+            if (fileStatus.getPath().getName().equals("part-m-00000")) {
+                singlePartFileStatus = fileStatus;
+            }
+            if (fileStatus.getPath().getName().startsWith("part-m-")) {
+                ++partFiles;
+            }
+        }
+        if (partFiles != 1) {
+            throw new IllegalStateException(
+                "getSinglePartFileStatus: Part file count should be 1, but is " +
+                partFiles);
+        }
+        return singlePartFileStatus;
+    }
+
+    @Override
+    public void setUp() {
+        if (jobTracker != null) {
+            System.out.println("Setting tasks to 3 for " + getName() +
+                               " since JobTracker exists...");
+            numWorkers = 3;
+        }
+        try {
+            Configuration conf = new Configuration();
+            FileSystem hdfs = FileSystem.get(conf);
+            // Since local jobs always use the same paths, remove them
+            Path oldLocalJobPaths = new Path(
+                GiraphJob.ZOOKEEPER_MANAGER_DIR_DEFAULT);
+            FileStatus [] fileStatusArr = hdfs.listStatus(oldLocalJobPaths);
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.isDir() &&
+                        fileStatus.getPath().getName().contains("job_local")) {
+                    System.out.println("Cleaning up local job path " +
+                                       fileStatus.getPath().getName());
+                    // Delete only the stale job directory, not the parent
+                    hdfs.delete(fileStatus.getPath(), true);
+                }
+            }
+            if (zkList == null) {
+                return;
+            }
+            ZooKeeperExt zooKeeperExt =
+                new ZooKeeperExt(zkList, 30*1000, this);
+            List<String> rootChildren = zooKeeperExt.getChildren("/", false);
+            for (String rootChild : rootChildren) {
+                if (rootChild.startsWith("_hadoopBsp")) {
+                    List<String> children =
+                        zooKeeperExt.getChildren("/" + rootChild, false);
+                    for (String child: children) {
+                        if (child.contains("job_local_")) {
+                            System.out.println("Cleaning up /_hadoopBsp/" +
+                                               child);
+                            zooKeeperExt.deleteExt(
+                                "/_hadoopBsp/" + child, -1, true);
+                        }
+                    }
+                }
+            }
+            zooKeeperExt.close();
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    @Override
+    public void process(WatchedEvent event) {
+        // Do nothing
+    }
+
+    /**
+     * Helper method to remove an old output directory if it exists,
+     * and set the output path for any VertexOutputFormat that uses
+     * FileOutputFormat.
+     *
+     * @param job Job to set the output path for
+     * @param outputPath Path to output
+     * @throws IOException
+     */
+    public static void removeAndSetOutput(GiraphJob job,
+                                          Path outputPath)
+            throws IOException {
+        remove(job.getConfiguration(), outputPath);
+        FileOutputFormat.setOutputPath(job, outputPath);
+    }
+    
+    /**
+     * Helper method to remove a path if it exists.
+     * 
+     * @param conf Configuration
+     * @param path Path to remove
+     * @throws IOException
+     */
+    public static void remove(Configuration conf, Path path) 
+            throws IOException {
+        FileSystem hdfs = FileSystem.get(conf);
+        hdfs.delete(path, true);
+    }
+
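+    /**
+     * Get the name of the method that called this one
+     *
+     * @return Name of the calling method
+     */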
+    public static String getCallingMethodName() {
+        return Thread.currentThread().getStackTrace()[2].getMethodName();
+    }
+}
diff --git a/src/test/java/org/apache/giraph/TestAutoCheckpoint.java b/src/test/java/org/apache/giraph/TestAutoCheckpoint.java
new file mode 100644
index 0000000..2ae9d8e
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestAutoCheckpoint.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.Path;
+
+import org.apache.giraph.examples.SimpleCheckpointVertex;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexOutputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for automated checkpoint restarting
+ */
+public class TestAutoCheckpoint extends BspCase {
+    /** Where the checkpoints will be stored and restarted from */
+    private final String HDFS_CHECKPOINT_DIR =
+        "/tmp/testBspCheckpoints";
+
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestAutoCheckpoint(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestAutoCheckpoint.class);
+    }
+
+    /**
+     * Run a job that requires checkpointing and will have a worker crash
+     * and still recover from a previous checkpoint.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testSingleFault()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        if (getJobTracker() == null) {
+            System.out.println(
+                "testSingleFault: Ignore this test in local mode.");
+            return;
+        }
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.getConfiguration().setBoolean(SimpleCheckpointVertex.ENABLE_FAULT,
+                                          true);
+        job.getConfiguration().setInt("mapred.map.max.attempts", 4);
+        job.getConfiguration().setInt(GiraphJob.POLL_MSECS, 5000);
+        job.getConfiguration().set(GiraphJob.CHECKPOINT_DIRECTORY,
+                                   HDFS_CHECKPOINT_DIR);
+        job.getConfiguration().setBoolean(
+            GiraphJob.CLEANUP_CHECKPOINTS_AFTER_SUCCESS, false);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+    }
+}
diff --git a/src/test/java/org/apache/giraph/TestBspBasic.java b/src/test/java/org/apache/giraph/TestBspBasic.java
new file mode 100644
index 0000000..96e3649
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestBspBasic.java
@@ -0,0 +1,402 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+import org.apache.giraph.examples.SimpleAggregatorWriter;
+import org.apache.giraph.examples.SimplePageRankVertex.SimplePageRankVertexInputFormat;
+import org.apache.giraph.examples.SimpleShortestPathsVertex.SimpleShortestPathsVertexOutputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexOutputFormat;
+import org.apache.giraph.examples.GeneratedVertexReader;
+import org.apache.giraph.examples.SimpleCombinerVertex;
+import org.apache.giraph.examples.SimpleFailVertex;
+import org.apache.giraph.examples.SimpleMsgVertex;
+import org.apache.giraph.examples.SimplePageRankVertex;
+import org.apache.giraph.examples.SimpleShortestPathsVertex;
+import org.apache.giraph.examples.SimpleSumCombiner;
+import org.apache.giraph.examples.SimpleSuperstepVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.GraphState;
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.JobID;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+
+/**
+ * Unit test for many simple BSP applications.
+ */
+public class TestBspBasic extends BspCase {
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestBspBasic(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestBspBasic.class);
+    }
+
+    /**
+     * Just instantiate the vertex (all functions are implemented) and the
+     * VertexInputFormat using reflection.
+     *
+     * @throws IllegalAccessException
+     * @throws InstantiationException
+     * @throws InterruptedException
+     * @throws IOException
+     * @throws InvocationTargetException
+     * @throws IllegalArgumentException
+     * @throws NoSuchMethodException
+     * @throws SecurityException
+     */
+    public void testInstantiateVertex()
+            throws InstantiationException, IllegalAccessException,
+            IOException, InterruptedException, IllegalArgumentException,
+            InvocationTargetException, SecurityException, NoSuchMethodException {
+        System.out.println("testInstantiateVertex: java.class.path=" +
+                           System.getProperty("java.class.path"));
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        job.setVertexClass(SimpleSuperstepVertex.class);
+        job.setVertexInputFormatClass(
+            SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat.class);
+        GraphState<LongWritable, IntWritable, FloatWritable, IntWritable> gs =
+            new GraphState<LongWritable, IntWritable,
+                           FloatWritable, IntWritable>();
+        BasicVertex<LongWritable, IntWritable, FloatWritable, IntWritable> vertex =
+            BspUtils.createVertex(job.getConfiguration());
+        vertex.initialize(null, null, null, null);
+        System.out.println("testInstantiateVertex: Got vertex " + vertex +
+                           ", graphState" + gs);
+        VertexInputFormat<LongWritable, IntWritable, FloatWritable, IntWritable>
+            inputFormat = BspUtils.createVertexInputFormat(job.getConfiguration());
+        List<InputSplit> splitArray =
+            inputFormat.getSplits(
+                new JobContext(new Configuration(), new JobID()), 1);
+        ByteArrayOutputStream byteArrayOutputStream =
+            new ByteArrayOutputStream();
+        DataOutputStream outputStream =
+            new DataOutputStream(byteArrayOutputStream);
+        ((Writable) splitArray.get(0)).write(outputStream);
+        System.out.println("testInstantiateVertex: Example output split = " +
+                           byteArrayOutputStream.toString());
+    }
+
+    /**
+     * Do some checks for local job runner.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testLocalJobRunnerConfig()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        if (getJobTracker() != null) {
+            System.out.println("testLocalJobRunnerConfig: Skipping for " +
+                               "non-local");
+            return;
+        }
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setWorkerConfiguration(5, 5, 100.0f);
+        job.getConfiguration().setBoolean(GiraphJob.SPLIT_MASTER_WORKER, true);
+        job.setVertexClass(SimpleSuperstepVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        try {
+            job.run(true);
+            fail("testLocalJobRunnerConfig: Expected an " +
+                 "IllegalArgumentException");
+        } catch (IllegalArgumentException e) {
+            // Expected: the local job runner cannot run multiple workers
+        }
+
+        job.getConfiguration().setBoolean(GiraphJob.SPLIT_MASTER_WORKER, false);
+        try {
+            job.run(true);
+            fail("testLocalJobRunnerConfig: Expected an " +
+                 "IllegalArgumentException");
+        } catch (IllegalArgumentException e) {
+            // Expected: the local job runner cannot run multiple workers
+        }
+        job.setWorkerConfiguration(1, 1, 100.0f);
+        job.run(true);
+    }
+
+    /**
+     * Run a sample BSP job in JobTracker, kill a task, and make sure
+     * the job fails (not enough attempts to restart)
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspFail()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        // Allow this test only to be run on a real Hadoop setup
+        if (getJobTracker() == null) {
+            System.out.println("testBspFail: not executed for local setup.");
+            return;
+        }
+
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.getConfiguration().setInt("mapred.map.max.attempts", 1);
+        job.setVertexClass(SimpleFailVertex.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(!job.run(true));
+    }
+
+    /**
+     * Run a sample BSP job locally and test supersteps.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspSuperStep()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.getConfiguration().setFloat(GiraphJob.TOTAL_INPUT_SPLIT_MULTIPLIER,
+                                        2.0f);
+        // GeneratedInputSplit will generate 10 vertices
+        job.getConfiguration().setLong(GeneratedVertexReader.READER_VERTICES,
+                                       10);
+        job.setVertexClass(SimpleSuperstepVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() == null) {
+            FileStatus fileStatus = getSinglePartFileStatus(job, outputPath);
+            assertTrue(fileStatus.getLen() == 49);
+        }
+    }
+
+    /**
+     * Run a sample BSP job locally and test messages.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspMsg()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimpleMsgVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        assertTrue(job.run(true));
+    }
+
+    /**
+     * Run a sample BSP job locally with no vertices and make sure
+     * it completes.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testEmptyVertexInputFormat()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.getConfiguration().setLong(GeneratedVertexReader.READER_VERTICES,
+                                       0);
+        job.setVertexClass(SimpleMsgVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        assertTrue(job.run(true));
+    }
+
+    /**
+     * Run a sample BSP job locally with a combiner and check the output value.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspCombiner()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCombinerVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexCombinerClass(SimpleSumCombiner.class);
+        assertTrue(job.run(true));
+    }
+
+    /**
+     * Run a sample BSP job locally and test PageRank.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspPageRank()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimplePageRankVertex.class);
+        job.setWorkerContextClass(
+            SimplePageRankVertex.SimplePageRankVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        assertTrue(job.run(true));
+        if (getJobTracker() == null) {
+            double maxPageRank =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalMax;
+            double minPageRank =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalMin;
+            long numVertices =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalSum;
+            System.out.println("testBspPageRank: maxPageRank=" + maxPageRank +
+                               " minPageRank=" + minPageRank +
+                               " numVertices=" + numVertices);
+            assertTrue("34.030 !< " + maxPageRank + " !< 34.0301",
+                maxPageRank > 34.030 && maxPageRank < 34.0301);
+            assertTrue("0.03 !< " + minPageRank + " !< " + "0.03001",
+                minPageRank > 0.03 && minPageRank < 0.03001);
+            assertTrue("numVertices = " + numVertices + " != 5", numVertices == 5);
+        }
+    }
+
+    /**
+     * Run a sample BSP job locally and test shortest paths.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspShortestPaths()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimpleShortestPathsVertex.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        job.setVertexOutputFormatClass(
+            SimpleShortestPathsVertexOutputFormat.class);
+        job.getConfiguration().setLong(SimpleShortestPathsVertex.SOURCE_ID, 0);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+
+        job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimpleShortestPathsVertex.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        job.setVertexOutputFormatClass(
+            SimpleShortestPathsVertexOutputFormat.class);
+        job.getConfiguration().setLong(SimpleShortestPathsVertex.SOURCE_ID, 0);
+        Path outputPath2 = new Path("/tmp/" + getCallingMethodName() + "2");
+        removeAndSetOutput(job, outputPath2);
+        assertTrue(job.run(true));
+        if (getJobTracker() == null) {
+            FileStatus fileStatus = getSinglePartFileStatus(job, outputPath);
+            FileStatus fileStatus2 = getSinglePartFileStatus(job, outputPath2);
+            assertTrue(fileStatus.getLen() == fileStatus2.getLen());
+        }
+    }
+
+    /**
+     * Run a sample BSP job locally and test PageRank with AggregatorWriter.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspPageRankWithAggregatorWriter()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimplePageRankVertex.class);
+        job.setWorkerContextClass(
+            SimplePageRankVertex.SimplePageRankVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        job.setAggregatorWriterClass(SimpleAggregatorWriter.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() == null) {
+            double maxPageRank =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalMax;
+            double minPageRank =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalMin;
+            long numVertices =
+                SimplePageRankVertex.SimplePageRankVertexWorkerContext.finalSum;
+            System.out.println("testBspPageRankWithAggregatorWriter: " +
+                               "maxPageRank=" + maxPageRank +
+                               " minPageRank=" + minPageRank +
+                               " numVertices=" + numVertices);
+            FileSystem fs = FileSystem.get(new Configuration());
+            FSDataInputStream input =
+                fs.open(new Path(SimpleAggregatorWriter.filename));
+            int i, all;
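+            // Each superstep appends a (max, min, sum) triple to the
+            // aggregator file; 'all' counts the fields read in the current
+            // triple, so a clean EOF leaves it at 0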
+            for (i = 0; ; i++) {
+                all = 0;
+                try {
+                    DoubleWritable max = new DoubleWritable();
+                    max.readFields(input);
+                    all++;
+                    DoubleWritable min = new DoubleWritable();
+                    min.readFields(input);
+                    all++;
+                    LongWritable sum = new LongWritable();
+                    sum.readFields(input);
+                    all++;
+                    if (i > 0) {
+                        assertTrue(max.get() == maxPageRank);
+                        assertTrue(min.get() == minPageRank);
+                        assertTrue(sum.get() == numVertices);
+                    }
+                } catch (IOException e) {
+                    break;
+                }
+            }
+            input.close();
+            // contained all supersteps
+            assertTrue(i == SimplePageRankVertex.MAX_SUPERSTEPS + 1 && all == 0);
+            remove(new Configuration(),
+                   new Path(SimpleAggregatorWriter.filename));
+        }
+    }
+}
diff --git a/src/test/java/org/apache/giraph/TestGraphPartitioner.java b/src/test/java/org/apache/giraph/TestGraphPartitioner.java
new file mode 100644
index 0000000..84acd6a
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestGraphPartitioner.java
@@ -0,0 +1,189 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.giraph.examples.GeneratedVertexReader;
+import org.apache.giraph.examples.SimpleCheckpointVertex;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexOutputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.partition.HashRangePartitionerFactory;
+import org.apache.giraph.graph.partition.PartitionBalancer;
+import org.apache.giraph.integration.SuperstepHashPartitionerFactory;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for manual checkpoint restarting
+ */
+public class TestGraphPartitioner extends BspCase {
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestGraphPartitioner(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestGraphPartitioner.class);
+    }
+
+    /**
+     * Run a sample BSP job locally and test various partitioners and
+     * partition algorithms.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testPartitioners()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        final int correctLen = 123;
+
+        GiraphJob job = new GiraphJob("testVertexBalancer");
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        job.getConfiguration().set(
+            PartitionBalancer.PARTITION_BALANCE_ALGORITHM,
+            PartitionBalancer.VERTICES_BALANCE_ALGORITHM);
+        Path outputPath = new Path("/tmp/testVertexBalancer");
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        FileSystem hdfs = FileSystem.get(job.getConfiguration());
+        if (getJobTracker() != null) {
+            FileStatus [] fileStatusArr = hdfs.listStatus(outputPath);
+            int totalLen = 0;
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.getPath().toString().contains("/part-m-")) {
+                    totalLen += fileStatus.getLen();
+                }
+            }
+            assertTrue(totalLen == correctLen);
+        }
+
+        job = new GiraphJob("testHashPartitioner");
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        outputPath = new Path("/tmp/testHashPartitioner");
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() != null) {
+            FileStatus [] fileStatusArr = hdfs.listStatus(outputPath);
+            int totalLen = 0;
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.getPath().toString().contains("/part-m-")) {
+                    totalLen += fileStatus.getLen();
+                }
+            }
+            assertTrue(totalLen == correctLen);
+        }
+
+        job = new GiraphJob("testSuperstepHashPartitioner");
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        job.setGraphPartitionerFactoryClass(
+            SuperstepHashPartitionerFactory.class);
+        outputPath = new Path("/tmp/testSuperstepHashPartitioner");
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() != null) {
+            FileStatus [] fileStatusArr = hdfs.listStatus(outputPath);
+            int totalLen = 0;
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.getPath().toString().contains("/part-m-")) {
+                    totalLen += fileStatus.getLen();
+                }
+            }
+            assertTrue(totalLen == correctLen);
+        }
+
+        job = new GiraphJob("testHashRangePartitioner");
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        job.setGraphPartitionerFactoryClass(
+            HashRangePartitionerFactory.class);
+        outputPath = new Path("/tmp/testHashRangePartitioner");
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() != null) {
+            FileStatus [] fileStatusArr = hdfs.listStatus(outputPath);
+            int totalLen = 0;
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.getPath().toString().contains("/part-m-")) {
+                    totalLen += fileStatus.getLen();
+                }
+            }
+            assertTrue(totalLen == correctLen);
+        }
+
+        job = new GiraphJob("testReverseIdSuperstepHashPartitioner");
+        setupConfiguration(job);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        job.setGraphPartitionerFactoryClass(
+            SuperstepHashPartitionerFactory.class);
+        job.getConfiguration().setBoolean(
+            GeneratedVertexReader.REVERSE_ID_ORDER,
+            true);
+        outputPath = new Path("/tmp/testReverseIdSuperstepHashPartitioner");
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        if (getJobTracker() != null) {
+            FileStatus[] fileStatusArr = hdfs.listStatus(outputPath);
+            long totalLen = 0;
+            for (FileStatus fileStatus : fileStatusArr) {
+                if (fileStatus.getPath().toString().contains("/part-m-")) {
+                    totalLen += fileStatus.getLen();
+                }
+            }
+            assertEquals(correctLen, totalLen);
+        }
+    }
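+
+    /**
+     * The four blocks above repeat the same "sum the part-m- file sizes"
+     * computation.  A sketch of a helper that could factor it out; it
+     * assumes the hdfs field, the imports already used by this file and
+     * the "/part-m-" naming above, and is not yet wired into the
+     * assertions.
+     *
+     * @param path output path to scan
+     * @return total length of the map output part files under path
+     * @throws IOException
+     */
+    private long totalPartFileLength(Path path) throws IOException {
+        long totalLen = 0;
+        for (FileStatus fileStatus : hdfs.listStatus(path)) {
+            if (fileStatus.getPath().toString().contains("/part-m-")) {
+                totalLen += fileStatus.getLen();
+            }
+        }
+        return totalLen;
+    }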
+}
diff --git a/src/test/java/org/apache/giraph/TestJsonBase64Format.java b/src/test/java/org/apache/giraph/TestJsonBase64Format.java
new file mode 100644
index 0000000..383ec8e
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestJsonBase64Format.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+
+import org.apache.giraph.benchmark.PageRankBenchmark;
+import org.apache.giraph.benchmark.PseudoRandomVertexInputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.lib.JsonBase64VertexInputFormat;
+import org.apache.giraph.lib.JsonBase64VertexOutputFormat;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test out the JsonBase64 format.
+ */
+public class TestJsonBase64Format extends BspCase {
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestJsonBase64Format(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestJsonBase64Format.class);
+    }
+
+    /**
+     * Start a job and finish after i supersteps, then begin a new job and
+     * continue for j more supersteps.  Check the results against a single
+     * job that runs i + j supersteps.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testContinue()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(PageRankBenchmark.PageRankEdgeListVertex.class);
+        job.setVertexInputFormatClass(PseudoRandomVertexInputFormat.class);
+        job.setVertexOutputFormatClass(JsonBase64VertexOutputFormat.class);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.AGGREGATE_VERTICES, 101);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.EDGES_PER_VERTEX, 2);
+        job.getConfiguration().setInt(PageRankBenchmark.SUPERSTEP_COUNT, 2);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+
+        job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(PageRankBenchmark.PageRankEdgeListVertex.class);
+        job.setVertexInputFormatClass(JsonBase64VertexInputFormat.class);
+        job.setVertexOutputFormatClass(JsonBase64VertexOutputFormat.class);
+        job.getConfiguration().setInt(PageRankBenchmark.SUPERSTEP_COUNT, 3);
+        FileInputFormat.setInputPaths(job, outputPath);
+        Path outputPath2 = new Path("/tmp/" + getCallingMethodName() + "2");
+        removeAndSetOutput(job, outputPath2);
+        assertTrue(job.run(true));
+
+        FileStatus twoJobsFile = null;
+        if (getJobTracker() == null) {
+            // Grab the final output of the two-job sequence; the second
+            // job wrote to outputPath2, not outputPath.
+            twoJobsFile = getSinglePartFileStatus(job, outputPath2);
+        }
+
+        job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(PageRankBenchmark.PageRankEdgeListVertex.class);
+        job.setVertexInputFormatClass(PseudoRandomVertexInputFormat.class);
+        job.setVertexOutputFormatClass(JsonBase64VertexOutputFormat.class);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.AGGREGATE_VERTICES, 101);
+        job.getConfiguration().setLong(
+            PseudoRandomVertexInputFormat.EDGES_PER_VERTEX, 2);
+        job.getConfiguration().setInt(PageRankBenchmark.SUPERSTEP_COUNT, 5);
+        Path outputPath3 = new Path("/tmp/" + getCallingMethodName() + "3");
+        removeAndSetOutput(job, outputPath3);
+        assertTrue(job.run(true));
+
+        if (getJobTracker() == null) {
+            FileStatus oneJobFile = getSinglePartFileStatus(job, outputPath3);
+            assertEquals(twoJobsFile.getLen(), oneJobFile.getLen());
+        }
+    }
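+
+    /**
+     * Sketch of a helper that would collapse the three near-identical job
+     * setups above; it only uses classes already imported by this test,
+     * and the input format still has to be chosen per job.
+     *
+     * @param name job name
+     * @param superstepCount number of supersteps to run
+     * @return a PageRank job writing JsonBase64 output
+     * @throws IOException
+     */
+    private GiraphJob preparePageRankJob(String name, int superstepCount)
+            throws IOException {
+        GiraphJob pageRankJob = new GiraphJob(name);
+        setupConfiguration(pageRankJob);
+        pageRankJob.setVertexClass(
+            PageRankBenchmark.PageRankEdgeListVertex.class);
+        pageRankJob.setVertexOutputFormatClass(
+            JsonBase64VertexOutputFormat.class);
+        pageRankJob.getConfiguration().setInt(
+            PageRankBenchmark.SUPERSTEP_COUNT, superstepCount);
+        return pageRankJob;
+    }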
+}
diff --git a/src/test/java/org/apache/giraph/TestManualCheckpoint.java b/src/test/java/org/apache/giraph/TestManualCheckpoint.java
new file mode 100644
index 0000000..d4252c8
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestManualCheckpoint.java
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+
+import org.apache.giraph.examples.SimpleCheckpointVertex;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexOutputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for manual checkpoint restarting
+ */
+public class TestManualCheckpoint extends BspCase {
+    /** Where the checkpoints will be stored and restarted */
+    private final String HDFS_CHECKPOINT_DIR =
+        "/tmp/testBspCheckpoints";
+
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestManualCheckpoint(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestManualCheckpoint.class);
+    }
+
+    /**
+     * Run a sample BSP job locally and test checkpointing.
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testBspCheckpoint()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.getConfiguration().set(GiraphJob.CHECKPOINT_DIRECTORY,
+                                   HDFS_CHECKPOINT_DIR);
+        job.getConfiguration().setBoolean(
+            GiraphJob.CLEANUP_CHECKPOINTS_AFTER_SUCCESS, false);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+        long fileLen = 0;
+        long idSum = 0;
+        if (getJobTracker() == null) {
+            FileStatus fileStatus = getSinglePartFileStatus(job, outputPath);
+            fileLen = fileStatus.getLen();
+            idSum = SimpleCheckpointVertex.
+                SimpleCheckpointVertexWorkerContext.finalSum;
+            System.out.println("testBspCheckpoint: idSum = " + idSum +
+                               " fileLen = " + fileLen);
+        }
+
+        // Restart the test from superstep 2
+        System.out.println(
+            "testBspCheckpoint: Restarting from superstep 2" +
+            " with checkpoint path = " + HDFS_CHECKPOINT_DIR);
+        GiraphJob restartedJob = new GiraphJob(getCallingMethodName() +
+                                               "Restarted");
+        setupConfiguration(restartedJob);
+        restartedJob.getConfiguration().set(GiraphJob.CHECKPOINT_DIRECTORY,
+                                            HDFS_CHECKPOINT_DIR);
+        restartedJob.getConfiguration().setLong(GiraphJob.RESTART_SUPERSTEP, 2);
+        restartedJob.setVertexClass(SimpleCheckpointVertex.class);
+        restartedJob.setWorkerContextClass(
+            SimpleCheckpointVertex.SimpleCheckpointVertexWorkerContext.class);
+        restartedJob.setVertexInputFormatClass(
+            SimpleSuperstepVertexInputFormat.class);
+        restartedJob.setVertexOutputFormatClass(
+            SimpleSuperstepVertexOutputFormat.class);
+        outputPath = new Path("/tmp/" + getCallingMethodName() + "Restarted");
+        removeAndSetOutput(restartedJob, outputPath);
+        assertTrue(restartedJob.run(true));
+        if (getJobTracker() == null) {
+            FileStatus fileStatus =
+                getSinglePartFileStatus(restartedJob, outputPath);
+            // Compare against the length recorded for the original run;
+            // overwriting fileLen here would make the check vacuous.
+            assertEquals(fileLen, fileStatus.getLen());
+            long idSumRestarted =
+                SimpleCheckpointVertex.
+                    SimpleCheckpointVertexWorkerContext.finalSum;
+            System.out.println("testBspCheckpoint: idSumRestarted = " +
+                               idSumRestarted);
+            assertEquals(idSum, idSumRestarted);
+        }
+    }
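+
+    // The restart path above needs only two settings beyond the normal job
+    // configuration: the checkpoint directory shared with the first run
+    // and the superstep to resume from, i.e.
+    //   conf.set(GiraphJob.CHECKPOINT_DIRECTORY, HDFS_CHECKPOINT_DIR);
+    //   conf.setLong(GiraphJob.RESTART_SUPERSTEP, 2);
+    // Everything else (vertex, formats, worker context) matches the
+    // original job so the restarted run can rebuild identical state.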
+}
diff --git a/src/test/java/org/apache/giraph/TestMutateGraphVertex.java b/src/test/java/org/apache/giraph/TestMutateGraphVertex.java
new file mode 100644
index 0000000..250de81
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestMutateGraphVertex.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.giraph.examples.SimpleMutateGraphVertex;
+import org.apache.giraph.examples.SimplePageRankVertex.SimplePageRankVertexInputFormat;
+import org.apache.giraph.examples.SimplePageRankVertex.SimplePageRankVertexOutputFormat;
+import org.apache.giraph.graph.GiraphJob;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for graph mutation
+ */
+public class TestMutateGraphVertex extends BspCase {
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestMutateGraphVertex(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestMutateGraphVertex.class);
+    }
+
+    /**
+     * Run a job that tests the various graph mutations that can occur
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testMutateGraph()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        job.setVertexClass(SimpleMutateGraphVertex.class);
+        job.setWorkerContextClass(
+            SimpleMutateGraphVertex.SimpleMutateGraphVertexWorkerContext.class);
+        job.setVertexInputFormatClass(SimplePageRankVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimplePageRankVertexOutputFormat.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertTrue(job.run(true));
+    }
+}
diff --git a/src/test/java/org/apache/giraph/TestNotEnoughMapTasks.java b/src/test/java/org/apache/giraph/TestNotEnoughMapTasks.java
new file mode 100644
index 0000000..16f2ab2
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestNotEnoughMapTasks.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+
+import org.apache.giraph.examples.SimpleCheckpointVertex;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexOutputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.fs.Path;
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Unit test for not enough map tasks
+ */
+public class TestNotEnoughMapTasks extends BspCase {
+    /**
+     * Create the test case
+     *
+     * @param testName name of the test case
+     */
+    public TestNotEnoughMapTasks(String testName) {
+        super(testName);
+    }
+
+    /**
+     * @return the suite of tests being tested
+     */
+    public static Test suite() {
+        return new TestSuite(TestNotEnoughMapTasks.class);
+    }
+
+    /**
+     * This job should always fail gracefully with not enough map tasks.
+     *
+     * @throws IOException
+     * @throws ClassNotFoundException
+     * @throws InterruptedException
+     */
+    public void testNotEnoughMapTasks()
+            throws IOException, InterruptedException, ClassNotFoundException {
+        if (getJobTracker() == null) {
+            System.out.println(
+                "testNotEnoughMapTasks: Ignore this test in local mode.");
+            return;
+        }
+        GiraphJob job = new GiraphJob(getCallingMethodName());
+        setupConfiguration(job);
+        // An impossibly large number of workers to request
+        final int unlikelyWorkers = Short.MAX_VALUE;
+        job.setWorkerConfiguration(
+            unlikelyWorkers, unlikelyWorkers, 100.0f);
+        // A single one-millisecond poll attempt makes the failure fast
+        job.getConfiguration().setInt(GiraphJob.POLL_ATTEMPTS, 1);
+        job.getConfiguration().setInt(GiraphJob.POLL_MSECS, 1);
+        job.setVertexClass(SimpleCheckpointVertex.class);
+        job.setVertexInputFormatClass(SimpleSuperstepVertexInputFormat.class);
+        job.setVertexOutputFormatClass(SimpleSuperstepVertexOutputFormat.class);
+        Path outputPath = new Path("/tmp/" + getCallingMethodName());
+        removeAndSetOutput(job, outputPath);
+        assertFalse(job.run(false));
+    }
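+
+    // The settings above combine to make the failure quick: with
+    // POLL_ATTEMPTS = 1 and POLL_MSECS = 1 the startup check gives up
+    // after a single one-millisecond poll, so asking for Short.MAX_VALUE
+    // workers fails almost immediately instead of waiting out the default
+    // polling schedule.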
+}
diff --git a/src/test/java/org/apache/giraph/TestPredicateLock.java b/src/test/java/org/apache/giraph/TestPredicateLock.java
new file mode 100644
index 0000000..9e06911
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestPredicateLock.java
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import junit.framework.TestCase;
+
+import org.apache.giraph.zk.BspEvent;
+import org.apache.giraph.zk.PredicateLock;
+
+/**
+ * Ensure that PredicateLock objects work correctly.
+ */
+public class TestPredicateLock extends TestCase {
+    private static class SignalThread extends Thread {
+        private final BspEvent event;
+        public SignalThread(BspEvent event) {
+            this.event = event;
+        }
+        public void run() {
+            try {
+                Thread.sleep(500);
+            } catch (InterruptedException e) {
+                // Ignored: the sleep only delays the signal below
+            }
+            event.signal();
+        }
+    }
+
+    /**
+     * Make sure the event is not signaled.
+     */
+    public void testWaitMsecsNoEvent() {
+        BspEvent event = new PredicateLock();
+        boolean gotPredicate = event.waitMsecs(50);
+        assertFalse(gotPredicate);
+    }
+
+    /**
+     * Single threaded case
+     */
+    public void testEvent() {
+        BspEvent event = new PredicateLock();
+        event.signal();
+        boolean gotPredicate = event.waitMsecs(-1);
+        assertTrue(gotPredicate);
+        event.reset();
+        gotPredicate = event.waitMsecs(0);
+        assertFalse(gotPredicate);
+    }
+
+    /**
+     * Make sure the event is signaled correctly
+     */
+    public void testWaitMsecs() {
+        System.out.println("testWaitMsecs:");
+        BspEvent event = new PredicateLock();
+        Thread signalThread = new SignalThread(event);
+        signalThread.start();
+        boolean gotPredicate = event.waitMsecs(2000);
+        assertTrue(gotPredicate);
+        try {
+            signalThread.join();
+        } catch (InterruptedException e) {
+            // Ignored: the signal has already been observed above
+        }
+        gotPredicate = event.waitMsecs(0);
+        assertTrue(gotPredicate);
+    }
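+
+    // Taken together, the cases above pin down the BspEvent behavior this
+    // test relies on: waitMsecs(-1) blocks until signaled, waitMsecs(0)
+    // polls the current state, a positive timeout waits at most that long,
+    // and a signal stays observable until reset() is called.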
+}
diff --git a/src/test/java/org/apache/giraph/TestVertexTypes.java b/src/test/java/org/apache/giraph/TestVertexTypes.java
new file mode 100644
index 0000000..c189604
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestVertexTypes.java
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import junit.framework.TestCase;
+
+import org.apache.giraph.examples.GeneratedVertexInputFormat;
+import org.apache.giraph.examples.SimpleSuperstepVertex.SimpleSuperstepVertexInputFormat;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.giraph.graph.VertexCombiner;
+import org.apache.giraph.graph.VertexInputFormat;
+import org.apache.giraph.graph.GraphMapper;
+import org.apache.giraph.graph.VertexOutputFormat;
+import org.apache.giraph.lib.JsonBase64VertexInputFormat;
+import org.apache.giraph.lib.JsonBase64VertexOutputFormat;
+import org.apache.giraph.utils.EmptyIterable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+
+
+public class TestVertexTypes
+    extends TestCase {
+
+    /**
+     * Matches the {@link GeneratedVertexInputFormat}
+     */
+    private static class GeneratedVertexMatch extends
+            EdgeListVertex<LongWritable, IntWritable, FloatWritable,
+            FloatWritable> {
+        @Override
+        public void compute(Iterator<FloatWritable> msgIterator)
+                throws IOException {
+        }
+    }
+
+    /**
+     * Matches the {@link GeneratedVertexInputFormat}
+     */
+    private static class DerivedVertexMatch extends GeneratedVertexMatch {
+    }
+
+    /**
+     * Mismatches the {@link GeneratedVertexInputFormat}
+     */
+    private static class GeneratedVertexMismatch extends
+            EdgeListVertex<LongWritable, FloatWritable, FloatWritable,
+            FloatWritable> {
+        @Override
+        public void compute(Iterator<FloatWritable> msgIterator)
+                throws IOException {
+        }
+    }
+
+    /**
+     * Matches the {@link GeneratedVertexMatch}
+     */
+    private static class GeneratedVertexMatchCombiner extends
+            VertexCombiner<LongWritable, FloatWritable> {
+
+        @Override
+        public Iterable<FloatWritable> combine(LongWritable vertexIndex,
+                Iterable<FloatWritable> msgList) throws IOException {
+            return new EmptyIterable<FloatWritable>();
+        }
+    }
+
+    /**
+     * Mismatches the {@link GeneratedVertexMatch}
+     */
+    private static class GeneratedVertexMismatchCombiner extends
+            VertexCombiner<LongWritable, DoubleWritable> {
+
+        @Override
+        public Iterable<DoubleWritable> combine(LongWritable vertexIndex,
+                Iterable<DoubleWritable> msgList)
+                throws IOException {
+            return new EmptyIterable<DoubleWritable>();
+        }
+    }
+
+    public void testMatchingType() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      GeneratedVertexMatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      SimpleSuperstepVertexInputFormat.class,
+                      VertexInputFormat.class);
+        conf.setClass(GiraphJob.VERTEX_COMBINER_CLASS,
+                      GeneratedVertexMatchCombiner.class,
+                      VertexCombiner.class);
+        mapper.determineClassTypes(conf);
+    }
+
+    public void testDerivedMatchingType() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      DerivedVertexMatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      SimpleSuperstepVertexInputFormat.class,
+                      VertexInputFormat.class);
+        mapper.determineClassTypes(conf);
+    }
+
+    public void testDerivedInputFormatType() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      DerivedVertexMatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      SimpleSuperstepVertexInputFormat.class,
+                      VertexInputFormat.class);
+        mapper.determineClassTypes(conf);
+    }
+
+    public void testMismatchingVertex() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      GeneratedVertexMismatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      SimpleSuperstepVertexInputFormat.class,
+                      VertexInputFormat.class);
+        try {
+            mapper.determineClassTypes(conf);
+            fail("testMismatchingVertex: Should have caught an exception!");
+        } catch (IllegalArgumentException e) {
+            // Expected: the vertex value type conflicts with the input format
+        }
+    }
+
+    public void testMismatchingCombiner() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      GeneratedVertexMatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      SimpleSuperstepVertexInputFormat.class,
+                      VertexInputFormat.class);
+        conf.setClass(GiraphJob.VERTEX_COMBINER_CLASS,
+                      GeneratedVertexMismatchCombiner.class,
+                      VertexCombiner.class);
+        try {
+            mapper.determineClassTypes(conf);
+            fail("testMismatchingCombiner: Should have caught an exception!");
+        } catch (IllegalArgumentException e) {
+            // Expected: the combiner message type conflicts with the vertex
+        }
+    }
+
+    public void testJsonBase64FormatType() throws SecurityException,
+            NoSuchMethodException, NoSuchFieldException {
+        @SuppressWarnings("rawtypes")
+        GraphMapper<?, ?, ?, ?> mapper = new GraphMapper();
+        Configuration conf = new Configuration();
+        conf.setClass(GiraphJob.VERTEX_CLASS,
+                      GeneratedVertexMatch.class,
+                      BasicVertex.class);
+        conf.setClass(GiraphJob.VERTEX_INPUT_FORMAT_CLASS,
+                      JsonBase64VertexInputFormat.class,
+                      VertexInputFormat.class);
+        conf.setClass(GiraphJob.VERTEX_OUTPUT_FORMAT_CLASS,
+                      JsonBase64VertexOutputFormat.class,
+                      VertexOutputFormat.class);
+        mapper.determineClassTypes(conf);
+    }
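+
+    // determineClassTypes() checks the generic type parameters of the
+    // configured vertex, input/output format and combiner classes against
+    // each other; the mismatch tests above depend on it throwing
+    // IllegalArgumentException when, e.g., a FloatWritable vertex value
+    // meets an input format that produces IntWritable values.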
+}
diff --git a/src/test/java/org/apache/giraph/TestZooKeeperExt.java b/src/test/java/org/apache/giraph/TestZooKeeperExt.java
new file mode 100644
index 0000000..a6d51dc
--- /dev/null
+++ b/src/test/java/org/apache/giraph/TestZooKeeperExt.java
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph;
+
+import java.util.List;
+
+import org.apache.giraph.zk.ZooKeeperExt;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs.Ids;
+
+import junit.framework.TestCase;
+
+public class TestZooKeeperExt
+        extends TestCase implements Watcher {
+    /** ZooKeeperExt instance */
+    private ZooKeeperExt zooKeeperExt = null;
+    /** ZooKeeper server list */
+    private String zkList = System.getProperty("prop.zookeeper.list");
+
+    public final String BASE_PATH = "/_zooKeeperExtTest";
+    public final String FIRST_PATH = "/_first";
+
+    /** Watcher callback; these tests do not react to events. */
+    public void process(WatchedEvent event) {
+    }
+
+    @Override
+    public void setUp() {
+        try {
+            if (zkList == null) {
+                return;
+            }
+            zooKeeperExt =
+                new ZooKeeperExt(zkList, 30*1000, this);
+            zooKeeperExt.deleteExt(BASE_PATH, -1, true);
+        } catch (KeeperException.NoNodeException e) {
+            System.out.println("Clean start: No node " + BASE_PATH);
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    @Override
+    public void tearDown() {
+        if (zooKeeperExt == null) {
+            return;
+        }
+        try {
+            zooKeeperExt.close();
+        } catch (InterruptedException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    public void testCreateExt() throws KeeperException, InterruptedException {
+        if (zooKeeperExt == null) {
+            System.out.println(
+                "testCreateExt: No prop.zookeeper.list set, skipping test");
+            return;
+        }
+        System.out.println("Created: " +
+            zooKeeperExt.createExt(
+                BASE_PATH + FIRST_PATH,
+                null,
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT,
+                true));
+        zooKeeperExt.delete(BASE_PATH + FIRST_PATH, -1);
+        zooKeeperExt.delete(BASE_PATH, -1);
+    }
+
+    public void testDeleteExt() throws KeeperException, InterruptedException {
+        if (zooKeeperExt == null) {
+            System.out.println(
+                "testDeleteExt: No prop.zookeeper.list set, skipping test");
+            return;
+        }
+        zooKeeperExt.create(BASE_PATH,
+                              null,
+                              Ids.OPEN_ACL_UNSAFE,
+                              CreateMode.PERSISTENT);
+        zooKeeperExt.create(BASE_PATH + FIRST_PATH,
+                                null,
+                                Ids.OPEN_ACL_UNSAFE,
+                                CreateMode.PERSISTENT);
+        try {
+            zooKeeperExt.deleteExt(BASE_PATH, -1, false);
+        } catch (KeeperException.NotEmptyException e) {
+            System.out.println(
+                "Correctly failed to delete since not recursive");
+        }
+        zooKeeperExt.deleteExt(BASE_PATH, -1, true);
+    }
+
+    public void testGetChildrenExt()
+        throws KeeperException, InterruptedException {
+        if (zooKeeperExt == null) {
+            System.out.println(
+                "testGetChildrenExt: No prop.zookeeper.list set, skipping test");
+            return;
+        }
+        zooKeeperExt.create(BASE_PATH,
+                              null,
+                              Ids.OPEN_ACL_UNSAFE,
+                              CreateMode.PERSISTENT);
+        zooKeeperExt.create(BASE_PATH + "/b",
+                null,
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT_SEQUENTIAL);
+        zooKeeperExt.create(BASE_PATH + "/a",
+                null,
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT_SEQUENTIAL);
+        zooKeeperExt.create(BASE_PATH + "/d",
+                null,
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT_SEQUENTIAL);
+        zooKeeperExt.create(BASE_PATH + "/c",
+                null,
+                Ids.OPEN_ACL_UNSAFE,
+                CreateMode.PERSISTENT_SEQUENTIAL);
+        List<String> fullPathList =
+            zooKeeperExt.getChildrenExt(BASE_PATH, false, false, true);
+        for (String fullPath : fullPathList) {
+            assertTrue(fullPath.contains(BASE_PATH + "/"));
+        }
+        List<String> sequenceOrderedList =
+            zooKeeperExt.getChildrenExt(BASE_PATH, false, true, true);
+        for (String fullPath : sequenceOrderedList) {
+            assertTrue(fullPath.contains(BASE_PATH + "/"));
+        }
+        assertEquals(4, sequenceOrderedList.size());
+        assertTrue(sequenceOrderedList.get(0).contains("/b"));
+        assertTrue(sequenceOrderedList.get(1).contains("/a"));
+        assertTrue(sequenceOrderedList.get(2).contains("/d"));
+        assertTrue(sequenceOrderedList.get(3).contains("/c"));
+    }
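+
+    // The two getChildrenExt() calls above differ only in their flags,
+    // which appear to be (path, watch, sequenceSorted, fullPath): with
+    // sequenceSorted the children come back ordered by their sequence
+    // suffix, i.e. in creation order b, a, d, c rather than by name.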
+}
diff --git a/src/test/java/org/apache/giraph/comm/RPCCommunicationsTest.java b/src/test/java/org/apache/giraph/comm/RPCCommunicationsTest.java
new file mode 100644
index 0000000..a7737af
--- /dev/null
+++ b/src/test/java/org/apache/giraph/comm/RPCCommunicationsTest.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.comm;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import junit.framework.TestCase;
+
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.mapreduce.JobID;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+
+public class RPCCommunicationsTest extends TestCase {
+
+    public void testDuplicateRpcPort() throws Exception {
+        @SuppressWarnings("rawtypes")
+        Context context = mock(Context.class);
+        Configuration conf = new Configuration();
+        conf.setInt("mapred.task.partition", 9);
+        conf.setInt(GiraphJob.MAX_WORKERS, 13);
+        when(context.getConfiguration()).thenReturn(conf);
+        when(context.getJobID()).thenReturn(new JobID());
+
+        RPCCommunications<IntWritable, IntWritable, IntWritable, IntWritable>
+            comm1 =
+                new RPCCommunications<
+                    IntWritable, IntWritable,
+                    IntWritable, IntWritable>(context, null, null);
+        RPCCommunications<IntWritable, IntWritable, IntWritable, IntWritable>
+            comm2 =
+                new RPCCommunications<
+                    IntWritable, IntWritable,
+                    IntWritable, IntWritable>(context, null, null);
+        RPCCommunications<IntWritable, IntWritable, IntWritable, IntWritable>
+            comm3 =
+                new RPCCommunications<
+                    IntWritable, IntWritable,
+                    IntWritable, IntWritable>(context, null, null);
+        assertEquals(30009, comm1.getPort());
+        assertEquals(30109, comm2.getPort());
+        assertEquals(30209, comm3.getPort());
+    }
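+
+    // The expected ports follow from the configuration above: the base
+    // port is 30000 plus mapred.task.partition (9), and each additional
+    // instance appears to probe 100 ports higher until it can bind, which
+    // yields 30009, 30109 and 30209 for the three instances.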
+}
diff --git a/src/test/java/org/apache/giraph/examples/ConnectedComponentsVertexTest.java b/src/test/java/org/apache/giraph/examples/ConnectedComponentsVertexTest.java
new file mode 100644
index 0000000..7ce75c6
--- /dev/null
+++ b/src/test/java/org/apache/giraph/examples/ConnectedComponentsVertexTest.java
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import com.google.common.base.Splitter;
+import com.google.common.collect.HashMultimap;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Maps;
+import com.google.common.collect.SetMultimap;
+import junit.framework.TestCase;
+
+import org.apache.giraph.utils.InternalVertexRunner;
+
+import java.util.Set;
+
+/**
+ *  Tests for {@link ConnectedComponentsVertex}
+ */
+public class ConnectedComponentsVertexTest extends TestCase {
+
+    /**
+     * A local integration test on toy data
+     */
+    public void testToyData() throws Exception {
+
+        // a small graph with three components
+        String[] graph = new String[] {
+                "1 2 3",
+                "2 1 4 5",
+                "3 1 4",
+                "4 2 3 5 13",
+                "5 2 4 12 13",
+                "12 5 13",
+                "13 4 5 12",
+
+                "6 7 8",
+                "7 6 10 11",
+                "8 6 10",
+                "10 7 8 11",
+                "11 7 10",
+
+                "9" };
+
+        // run internally
+        Iterable<String> results = InternalVertexRunner.run(
+                ConnectedComponentsVertex.class,
+                MinimumIntCombiner.class,
+                IntIntNullIntTextInputFormat.class,
+                VertexWithComponentTextOutputFormat.class,
+                Maps.<String,String>newHashMap(), graph);
+
+        SetMultimap<Integer,Integer> components = parseResults(results);
+
+        Set<Integer> componentIDs = components.keySet();
+        assertEquals(3, componentIDs.size());
+        assertTrue(componentIDs.contains(1));
+        assertTrue(componentIDs.contains(6));
+        assertTrue(componentIDs.contains(9));
+
+        Set<Integer> componentOne = components.get(1);
+        assertEquals(7, componentOne.size());
+        assertTrue(componentOne.contains(1));
+        assertTrue(componentOne.contains(2));
+        assertTrue(componentOne.contains(3));
+        assertTrue(componentOne.contains(4));
+        assertTrue(componentOne.contains(5));
+        assertTrue(componentOne.contains(12));
+        assertTrue(componentOne.contains(13));
+
+        Set<Integer> componentTwo = components.get(6);
+        assertEquals(5, componentTwo.size());
+        assertTrue(componentTwo.contains(6));
+        assertTrue(componentTwo.contains(7));
+        assertTrue(componentTwo.contains(8));
+        assertTrue(componentTwo.contains(10));
+        assertTrue(componentTwo.contains(11));
+
+        Set<Integer> componentThree = components.get(9);
+        assertEquals(1, componentThree.size());
+        assertTrue(componentThree.contains(9));
+    }
+
+    private SetMultimap<Integer,Integer> parseResults(
+            Iterable<String> results) {
+        SetMultimap<Integer,Integer> components = HashMultimap.create();
+        for (String result : results) {
+            Iterable<String> parts = Splitter.on('\t').split(result);
+            int vertex = Integer.parseInt(Iterables.get(parts, 0));
+            int component = Integer.parseInt(Iterables.get(parts, 1));
+            components.put(component, vertex);
+        }
+        return components;
+    }
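+
+    // parseResults() keys each vertex by its component label, and HCC
+    // labels a component with its smallest vertex id, which is why the
+    // three components above are expected under the keys 1, 6 and 9.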
+}
diff --git a/src/test/java/org/apache/giraph/examples/MinimumIntCombinerTest.java b/src/test/java/org/apache/giraph/examples/MinimumIntCombinerTest.java
new file mode 100644
index 0000000..c1132ed
--- /dev/null
+++ b/src/test/java/org/apache/giraph/examples/MinimumIntCombinerTest.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import junit.framework.TestCase;
+import org.apache.giraph.graph.VertexCombiner;
+import org.apache.hadoop.io.IntWritable;
+
+
+import java.util.Arrays;
+
+public class MinimumIntCombinerTest extends TestCase {
+
+    public void testCombiner() throws Exception {
+
+        VertexCombiner<IntWritable, IntWritable> combiner =
+                new MinimumIntCombiner();
+
+        Iterable<IntWritable> result = combiner.combine(
+                new IntWritable(1), Arrays.asList(
+                new IntWritable(39947466), new IntWritable(199),
+                new IntWritable(19998888), new IntWritable(42)));
+        assertTrue(result.iterator().hasNext());
+        assertEquals(42, result.iterator().next().get());
+    }
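+
+    // The combiner is expected to collapse the four messages into their
+    // minimum, min(39947466, 199, 19998888, 42) = 42, delivered as a
+    // single-element Iterable.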
+}
diff --git a/src/test/java/org/apache/giraph/examples/SimpleShortestPathVertexTest.java b/src/test/java/org/apache/giraph/examples/SimpleShortestPathVertexTest.java
new file mode 100644
index 0000000..c1d8617
--- /dev/null
+++ b/src/test/java/org/apache/giraph/examples/SimpleShortestPathVertexTest.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.examples;
+
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import junit.framework.TestCase;
+import org.apache.giraph.utils.InternalVertexRunner;
+import org.apache.giraph.utils.MockUtils;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.mockito.Mockito;
+
+import java.util.Map;
+
+/**
+ * Contains a simple unit test for {@link SimpleShortestPathsVertex}
+ */
+public class SimpleShortestPathVertexTest extends TestCase {
+
+    /**
+     * Test the behavior when a shorter path to a vertex has been found
+     */
+    public void testOnShorterPathFound() throws Exception {
+
+        SimpleShortestPathsVertex vertex = new SimpleShortestPathsVertex();
+        vertex.initialize(null, null, null, null);
+        vertex.addEdge(new LongWritable(10L), new FloatWritable(2.5f));
+        vertex.addEdge(new LongWritable(20L), new FloatWritable(0.5f));
+
+        MockUtils.MockedEnvironment<LongWritable, DoubleWritable, FloatWritable,
+                DoubleWritable> env = MockUtils.prepareVertex(vertex, 1L,
+                new LongWritable(7L), new DoubleWritable(Double.MAX_VALUE),
+                false);
+
+        Mockito.when(env.getConfiguration().getLong(
+                SimpleShortestPathsVertex.SOURCE_ID,
+                SimpleShortestPathsVertex.SOURCE_ID_DEFAULT)).thenReturn(2L);
+
+        vertex.compute(Lists.newArrayList(new DoubleWritable(2),
+                new DoubleWritable(1.5)).iterator());
+
+        assertTrue(vertex.isHalted());
+        assertEquals(1.5, vertex.getVertexValue().get());
+
+        env.verifyMessageSent(new LongWritable(10L), new DoubleWritable(4));
+        env.verifyMessageSent(new LongWritable(20L), new DoubleWritable(2));
+    }
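+
+    // The expected messages follow the relaxation arithmetic: the new
+    // distance is min(2, 1.5) = 1.5, and each neighbor is offered that
+    // distance plus the edge weight, 1.5 + 2.5 = 4 for vertex 10 and
+    // 1.5 + 0.5 = 2 for vertex 20.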
+
+    /**
+     * Test the behavior when a new but not shorter path to a vertex has
+     * been found
+     */
+    public void testOnNoShorterPathFound() throws Exception {
+
+        SimpleShortestPathsVertex vertex = new SimpleShortestPathsVertex();
+        vertex.initialize(null, null, null, null);
+        vertex.addEdge(new LongWritable(10L), new FloatWritable(2.5f));
+        vertex.addEdge(new LongWritable(20L), new FloatWritable(0.5f));
+
+        MockUtils.MockedEnvironment<LongWritable, DoubleWritable, FloatWritable,
+                DoubleWritable> env = MockUtils.prepareVertex(vertex, 1L,
+                new LongWritable(7L), new DoubleWritable(0.5), false);
+
+        Mockito.when(env.getConfiguration().getLong(
+                SimpleShortestPathsVertex.SOURCE_ID,
+                SimpleShortestPathsVertex.SOURCE_ID_DEFAULT)).thenReturn(2L);
+
+        vertex.compute(Lists.newArrayList(new DoubleWritable(2),
+                new DoubleWritable(1.5)).iterator());
+
+        assertTrue(vertex.isHalted());
+        assertEquals(0.5, vertex.getVertexValue().get());
+
+        env.verifyNoMessageSent();
+    }
+
+    /**
+     * A local integration test on toy data
+     */
+    public void testToyData() throws Exception {
+
+        // a small four vertex graph
+        String[] graph = new String[] {
+                "[1,0,[[2,1],[3,3]]]",
+                "[2,0,[[3,1],[4,10]]]",
+                "[3,0,[[4,2]]]",
+                "[4,0,[]]" };
+
+        // start from vertex 1
+        Map<String, String> params = Maps.newHashMap();
+        params.put(SimpleShortestPathsVertex.SOURCE_ID, "1");
+
+        // run internally
+        Iterable<String> results = InternalVertexRunner.run(
+                SimpleShortestPathsVertex.class,
+                SimpleShortestPathsVertex.
+                        SimpleShortestPathsVertexInputFormat.class,
+                SimpleShortestPathsVertex.
+                        SimpleShortestPathsVertexOutputFormat.class,
+                params, graph);
+
+        Map<Long, Double> distances = parseDistances(results);
+
+        // verify results
+        assertNotNull(distances);
+        assertEquals(4, distances.size());
+        assertEquals(0.0, distances.get(1L));
+        assertEquals(1.0, distances.get(2L));
+        assertEquals(2.0, distances.get(3L));
+        assertEquals(4.0, distances.get(4L));
+    }
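+
+    // The expected distances can be checked by hand from the toy graph:
+    // d(1) = 0 at the source, d(2) = 1 over the edge [2,1], d(3) =
+    // min(3, 1 + 1) = 2 via vertex 2, and d(4) = min(1 + 10, 2 + 2) = 4
+    // via vertex 3.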
+
+    private Map<Long, Double> parseDistances(Iterable<String> results) {
+        Map<Long, Double> distances =
+                Maps.newHashMapWithExpectedSize(Iterables.size(results));
+        for (String line : results) {
+            try {
+                JSONArray jsonVertex = new JSONArray(line);
+                distances.put(jsonVertex.getLong(0), jsonVertex.getDouble(1));
+            } catch (JSONException e) {
+                throw new IllegalArgumentException(
+                    "Couldn't get vertex from line " + line, e);
+            }
+        }
+        return distances;
+    }
+}
diff --git a/src/test/java/org/apache/giraph/graph/TestEdgeListVertex.java b/src/test/java/org/apache/giraph/graph/TestEdgeListVertex.java
new file mode 100644
index 0000000..d1b0094
--- /dev/null
+++ b/src/test/java/org/apache/giraph/graph/TestEdgeListVertex.java
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.graph;
+
+
+import junit.framework.TestCase;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.utils.WritableUtils;
+
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Tests {@link EdgeListVertex}.
+ */
+public class TestEdgeListVertex extends TestCase {
+    /** Instantiated vertex filled in from setup() */
+    private IFDLEdgeListVertex vertex;
+    /** Job filled in by setup() */
+    private GiraphJob job;
+
+    /**
+     * Simple instantiable class that extends {@link EdgeListVertex}.
+     */
+    private static class IFDLEdgeListVertex extends
+            EdgeListVertex<IntWritable, FloatWritable, DoubleWritable,
+            LongWritable> {
+
+        @Override
+        public void compute(Iterator<LongWritable> msgIterator)
+                throws IOException {
+        }
+    }
+
+    @Override
+    public void setUp() {
+        try {
+            job = new GiraphJob("TestEdgeArrayVertex");
+        } catch (IOException e) {
+            throw new RuntimeException("setUp: Failed", e);
+        }
+        job.setVertexClass(IFDLEdgeListVertex.class);
+        job.getConfiguration().setClass(GiraphJob.VERTEX_INDEX_CLASS,
+            IntWritable.class, WritableComparable.class);
+        job.getConfiguration().setClass(GiraphJob.VERTEX_VALUE_CLASS,
+            FloatWritable.class, Writable.class);
+        job.getConfiguration().setClass(GiraphJob.EDGE_VALUE_CLASS,
+            DoubleWritable.class, Writable.class);
+        job.getConfiguration().setClass(GiraphJob.MESSAGE_VALUE_CLASS,
+            LongWritable.class, Writable.class);
+        vertex = (IFDLEdgeListVertex)
+            BspUtils.<IntWritable, FloatWritable, DoubleWritable, LongWritable>
+            createVertex(job.getConfiguration());
+    }
+
+    public void testInstantiate() throws IOException {
+        assertNotNull(vertex);
+    }
+
+    public void testEdges() {
+        Map<IntWritable, DoubleWritable> edgeMap = Maps.newHashMap();
+        for (int i = 1000; i > 0; --i) {
+            edgeMap.put(new IntWritable(i), new DoubleWritable(i * 2.0));
+        }
+        vertex.initialize(null, null, edgeMap, null);
+        assertEquals(vertex.getNumOutEdges(), 1000);
+        int expectedIndex = 1;
+        for (IntWritable index : vertex) {
+            assertEquals(index.get(), expectedIndex);
+            assertEquals(vertex.getEdgeValue(index).get(),
+                         expectedIndex * 2.0d);
+            ++expectedIndex;
+        }
+        assertEquals(vertex.removeEdge(new IntWritable(500)),
+                     new DoubleWritable(500 * 2.0));
+        assertEquals(vertex.getNumOutEdges(), 999);
+    }
+
+    public void testGetEdges() {
+        Map<IntWritable, DoubleWritable> edgeMap = Maps.newHashMap();
+        for (int i = 1000; i > 0; --i) {
+            edgeMap.put(new IntWritable(i), new DoubleWritable(i * 3.0));
+        }
+        vertex.initialize(null, null, edgeMap, null);
+        assertEquals(vertex.getNumOutEdges(), 1000);
+        assertEquals(vertex.getEdgeValue(new IntWritable(600)),
+                     new DoubleWritable(600 * 3.0));
+        assertEquals(vertex.removeEdge(new IntWritable(600)),
+                     new DoubleWritable(600 * 3.0));
+        assertEquals(vertex.getNumOutEdges(), 999);
+        assertEquals(vertex.getEdgeValue(new IntWritable(500)),
+                     new DoubleWritable(500 * 3.0));
+        assertEquals(vertex.getEdgeValue(new IntWritable(700)),
+                     new DoubleWritable(700 * 3.0));
+    }
+
+    public void testAddRemoveEdges() {
+        Map<IntWritable, DoubleWritable> edgeMap = Maps.newHashMap();
+        vertex.initialize(null, null, edgeMap, null);
+        assertEquals(vertex.getNumOutEdges(), 0);
+        assertTrue(vertex.addEdge(new IntWritable(2),
+                                  new DoubleWritable(2.0)));
+        assertEquals(vertex.getNumOutEdges(), 1);
+        assertEquals(vertex.getEdgeValue(new IntWritable(2)),
+                                         new DoubleWritable(2.0));
+        assertTrue(vertex.addEdge(new IntWritable(4),
+                                 new DoubleWritable(4.0)));
+        assertTrue(vertex.addEdge(new IntWritable(3),
+                                  new DoubleWritable(3.0)));
+        assertTrue(vertex.addEdge(new IntWritable(1),
+                                  new DoubleWritable(1.0)));
+        assertEquals(vertex.getNumOutEdges(), 4);
+        assertNull(vertex.getEdgeValue(new IntWritable(5)));
+        assertNull(vertex.getEdgeValue(new IntWritable(0)));
+        int i = 1;
+        for (IntWritable edgeDestId : vertex) {
+            assertEquals(i, edgeDestId.get());
+            assertEquals(i * 1.0d, vertex.getEdgeValue(edgeDestId).get());
+            ++i;
+        }
+        assertNotNull(vertex.removeEdge(new IntWritable(1)));
+        assertEquals(vertex.getNumOutEdges(), 3);
+        assertNotNull(vertex.removeEdge(new IntWritable(3)));
+        assertEquals(vertex.getNumOutEdges(), 2);
+        assertNotNull(vertex.removeEdge(new IntWritable(2)));
+        assertEquals(vertex.getNumOutEdges(), 1);
+        assertNotNull(vertex.removeEdge(new IntWritable(4)));
+        assertEquals(vertex.getNumOutEdges(), 0);
+    }
+
+    public void testSerialize() {
+        Map<IntWritable, DoubleWritable> edgeMap = Maps.newHashMap();
+        for (int i = 1000; i > 0; --i) {
+            edgeMap.put(new IntWritable(i), new DoubleWritable(i * 2.0));
+        }
+        List<LongWritable> messageList = Lists.newArrayList();
+        messageList.add(new LongWritable(4));
+        messageList.add(new LongWritable(5));
+        vertex.initialize(
+            new IntWritable(2), new FloatWritable(3.0f), edgeMap, messageList);
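+        // Round-trip through a byte array to verify that write() and
+        // readFields() preserve the vertex state.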
+        byte[] byteArray = WritableUtils.writeToByteArray(vertex);
+        IFDLEdgeListVertex readVertex = (IFDLEdgeListVertex)
+            BspUtils.<IntWritable, FloatWritable, DoubleWritable, LongWritable>
+            createVertex(job.getConfiguration());
+        WritableUtils.readFieldsFromByteArray(byteArray, readVertex);
+        assertEquals(vertex, readVertex);
+    }
+}
diff --git a/src/test/java/org/apache/giraph/lib/TestAdjacencyListTextVertexOutputFormat.java b/src/test/java/org/apache/giraph/lib/TestAdjacencyListTextVertexOutputFormat.java
new file mode 100644
index 0000000..5263a9f
--- /dev/null
+++ b/src/test/java/org/apache/giraph/lib/TestAdjacencyListTextVertexOutputFormat.java
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import junit.framework.TestCase;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.mockito.Matchers;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import static org.apache.giraph.lib.AdjacencyListTextVertexOutputFormat.AdjacencyListVertexWriter;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestAdjacencyListTextVertexOutputFormat extends TestCase {
+  public void testVertexWithNoEdges() throws IOException, InterruptedException {
+    Configuration conf = new Configuration();
+    TaskAttemptContext tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+
+    BasicVertex vertex = mock(BasicVertex.class);
+    when(vertex.getVertexId()).thenReturn(new Text("The Beautiful South"));
+    when(vertex.getVertexValue()).thenReturn(new DoubleWritable(32.2d));
+    // Create empty iterator == no edges
+    when(vertex.iterator()).thenReturn(new ArrayList<Text>().iterator());
+
+    RecordWriter<Text, Text> tw = mock(RecordWriter.class);
+    AdjacencyListVertexWriter writer = new AdjacencyListVertexWriter(tw);
+    writer.initialize(tac);
+    writer.writeVertex(vertex);
+
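+    // With no edges, the expected output line is just "<id>\t<value>".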
+    Text expected = new Text("The Beautiful South\t32.2");
+    verify(tw).write(expected, null);
+    verify(vertex, times(1)).iterator();
+    verify(vertex, times(0)).getEdgeValue(Matchers.<WritableComparable>any());
+  }
+
+  public void testVertexWithEdges() throws IOException, InterruptedException {
+    Configuration conf = new Configuration();
+    TaskAttemptContext tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+
+    BasicVertex vertex = mock(BasicVertex.class);
+    when(vertex.getVertexId()).thenReturn(new Text("San Francisco"));
+    when(vertex.getVertexValue()).thenReturn(new DoubleWritable(0d));
+    when(vertex.getNumEdges()).thenReturn(2L);
+    ArrayList<Text> cities = new ArrayList<Text>();
+    Collections.addAll(cities, new Text("Los Angeles"), new Text("Phoenix"));
+
+    when(vertex.iterator()).thenReturn(cities.iterator());
+    mockEdgeValue(vertex, "Los Angeles", 347.16);
+    mockEdgeValue(vertex, "Phoenix", 652.48);
+
+    RecordWriter<Text, Text> tw = mock(RecordWriter.class);
+    AdjacencyListVertexWriter writer = new AdjacencyListVertexWriter(tw);
+    writer.initialize(tac);
+    writer.writeVertex(vertex);
+
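+    // Each out-edge appends "\t<target id>\t<edge value>" to the output line.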
+    Text expected = new Text("San Francisco\t0.0\tLos Angeles\t347.16\t" +
+            "Phoenix\t652.48");
+    verify(tw).write(expected, null);
+    verify(vertex, times(1)).iterator();
+    verify(vertex, times(2)).getEdgeValue(Matchers.<WritableComparable>any());
+  }
+
+  public void testWithDifferentDelimiter() throws IOException, InterruptedException {
+    Configuration conf = new Configuration();
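+    // LINE_TOKENIZE_VALUE replaces the default tab delimiter with ":::".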
+    conf.set(AdjacencyListVertexWriter.LINE_TOKENIZE_VALUE, ":::");
+    TaskAttemptContext tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+
+    BasicVertex vertex = mock(BasicVertex.class);
+    when(vertex.getVertexId()).thenReturn(new Text("San Francisco"));
+    when(vertex.getVertexValue()).thenReturn(new DoubleWritable(0d));
+    when(vertex.getNumEdges()).thenReturn(2L);
+    ArrayList<Text> cities = new ArrayList<Text>();
+    Collections.addAll(cities, new Text("Los Angeles"), new Text("Phoenix"));
+
+    when(vertex.iterator()).thenReturn(cities.iterator());
+    mockEdgeValue(vertex, "Los Angeles", 347.16);
+    mockEdgeValue(vertex, "Phoenix", 652.48);
+
+    RecordWriter<Text, Text> tw = mock(RecordWriter.class);
+    AdjacencyListVertexWriter writer = new AdjacencyListVertexWriter(tw);
+    writer.initialize(tac);
+    writer.writeVertex(vertex);
+
+    Text expected = new Text("San Francisco:::0.0:::Los Angeles:::347.16:::" +
+            "Phoenix:::652.48");
+    verify(tw).write(expected, null);
+    verify(vertex, times(1)).iterator();
+    verify(vertex, times(2)).getEdgeValue(Matchers.<WritableComparable>any());
+  }
+
+  private void mockEdgeValue(BasicVertex vertex, String targetId, double value) {
+    when(vertex.getEdgeValue(new Text(targetId)))
+        .thenReturn(new DoubleWritable(value));
+  }
+}
diff --git a/src/test/java/org/apache/giraph/lib/TestIdWithValueTextOutputFormat.java b/src/test/java/org/apache/giraph/lib/TestIdWithValueTextOutputFormat.java
new file mode 100644
index 0000000..9f91e88
--- /dev/null
+++ b/src/test/java/org/apache/giraph/lib/TestIdWithValueTextOutputFormat.java
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.lib;
+
+import junit.framework.TestCase;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.mockito.Matchers;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import static org.apache.giraph.lib.IdWithValueTextOutputFormat.IdWithValueVertexWriter;
+import static org.apache.giraph.lib.IdWithValueTextOutputFormat.IdWithValueVertexWriter.LINE_TOKENIZE_VALUE;
+import static org.apache.giraph.lib.IdWithValueTextOutputFormat.IdWithValueVertexWriter.REVERSE_ID_AND_VALUE;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestIdWithValueTextOutputFormat extends TestCase {
+  public void testHappyPath() throws IOException, InterruptedException {
+    Configuration conf = new Configuration();
+    Text expected = new Text("Four Tops\t4.0");
+
+    idWithValueTestWorker(conf, expected);
+  }
+
+  public void testReverseIdAndValue() throws IOException, InterruptedException {
+    Configuration conf = new Configuration();
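+    // REVERSE_ID_AND_VALUE makes the writer emit the value before the id.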
+    conf.setBoolean(REVERSE_ID_AND_VALUE, true);
+    Text expected = new Text("4.0\tFour Tops");
+
+    idWithValueTestWorker(conf, expected);
+  }
+
+  public void testWithDifferentDelimiter() throws IOException,
+      InterruptedException {
+    Configuration conf = new Configuration();
+    conf.set(LINE_TOKENIZE_VALUE, "blah");
+    Text expected = new Text("Four Topsblah4.0");
+
+    idWithValueTestWorker(conf, expected);
+  }
+
+  private void idWithValueTestWorker(Configuration conf, Text expected)
+      throws IOException, InterruptedException {
+    TaskAttemptContext tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+
+    BasicVertex vertex = mock(BasicVertex.class);
+    when(vertex.getVertexId()).thenReturn(new Text("Four Tops"));
+    when(vertex.getVertexValue()).thenReturn(new DoubleWritable(4d));
+
+    // Create empty iterator == no edges
+    when(vertex.iterator()).thenReturn(new ArrayList<Text>().iterator());
+
+    RecordWriter<Text, Text> tw = mock(RecordWriter.class);
+    IdWithValueVertexWriter writer = new IdWithValueVertexWriter(tw);
+    writer.initialize(tac);
+    writer.writeVertex(vertex);
+
+    verify(tw).write(expected, null);
+    verify(vertex, times(0)).iterator();
+    verify(vertex, times(0)).getEdgeValue(Matchers.<WritableComparable>any());
+  }
+}
diff --git a/src/test/java/org/apache/giraph/lib/TestLongDoubleDoubleAdjacencyListVertexInputFormat.java b/src/test/java/org/apache/giraph/lib/TestLongDoubleDoubleAdjacencyListVertexInputFormat.java
new file mode 100644
index 0000000..ffd6b26
--- /dev/null
+++ b/src/test/java/org/apache/giraph/lib/TestLongDoubleDoubleAdjacencyListVertexInputFormat.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import junit.framework.TestCase;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.GraphState;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import static org.apache.giraph.lib.TestTextDoubleDoubleAdjacencyListVertexInputFormat.assertValidVertex;
+import static org.apache.giraph.lib.TestTextDoubleDoubleAdjacencyListVertexInputFormat.setGraphState;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestLongDoubleDoubleAdjacencyListVertexInputFormat extends TestCase {
+
+  private RecordReader<LongWritable, Text> rr;
+  private Configuration conf;
+  private TaskAttemptContext tac;
+  private GraphState<LongWritable, DoubleWritable, DoubleWritable, BooleanWritable> graphState;
+
+  public void setUp() throws IOException, InterruptedException {
+    rr = mock(RecordReader.class);
+    when(rr.nextKeyValue()).thenReturn(true);
+    conf = new Configuration();
+    conf.setClass(GiraphJob.VERTEX_CLASS, DummyVertex.class, BasicVertex.class);
+    conf.setClass(GiraphJob.VERTEX_INDEX_CLASS, LongWritable.class, Writable.class);
+    conf.setClass(GiraphJob.VERTEX_VALUE_CLASS, DoubleWritable.class, Writable.class);
+    graphState = mock(GraphState.class);
+    tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+  }
+
+  public void testIndexMustHaveValue() throws IOException, InterruptedException {
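+    // A vertex id with no following value should be rejected by the reader.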
+    String input = "123";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+
+    try {
+      vr.nextVertex();
+      vr.getCurrentVertex();
+      fail("Should have thrown an IllegalArgumentException");
+    } catch (IllegalArgumentException iae) {
+      assertTrue(iae.getMessage().startsWith("Line did not split correctly: "));
+    }
+  }
+
+  public void testEdgesMustHaveValues() throws IOException, InterruptedException {
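+    // The trailing edge id 100 has no edge value, so parsing should fail.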
+    String input = "99\t55.2\t100";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+
+    try {
+      vr.nextVertex();
+      vr.getCurrentVertex();
+      fail("Should have thrown an IllegalArgumentException");
+    } catch (IllegalArgumentException iae) {
+      assertTrue(iae.getMessage().startsWith("Line did not split correctly: "));
+    }
+  }
+
+  public void testHappyPath() throws Exception {
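+    // Input layout: <vertex id>\t<vertex value>, then <target id>\t<edge value> pairs.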
+    String input = "42\t0.1\t99\t0.2\t2000\t0.3\t4000\t0.4";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+
+    assertTrue("Should have been able to read vertex", vr.nextVertex());
+    BasicVertex<LongWritable, DoubleWritable, DoubleWritable, BooleanWritable>
+        vertex = vr.getCurrentVertex();
+    setGraphState(vertex, graphState);
+    assertValidVertex(conf, graphState, vertex,
+        new LongWritable(42), new DoubleWritable(0.1),
+        new Edge<LongWritable, DoubleWritable>(new LongWritable(99), new DoubleWritable(0.2)),
+        new Edge<LongWritable, DoubleWritable>(new LongWritable(2000), new DoubleWritable(0.3)),
+        new Edge<LongWritable, DoubleWritable>(new LongWritable(4000), new DoubleWritable(0.4)));
+    assertEquals(vertex.getNumOutEdges(), 3);
+  }
+
+  public void testDifferentSeparators() throws Exception {
+    String input = "12345:42.42:9999999:99.9";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
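+    // Configure the reader to split on ":" instead of the default tab.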
+    conf.set(AdjacencyListVertexReader.LINE_TOKENIZE_VALUE, ":");
+    LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new LongDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+    assertTrue("Should have been able to read vertex", vr.nextVertex());
+    BasicVertex<LongWritable, DoubleWritable, DoubleWritable, BooleanWritable>
+        vertex = vr.getCurrentVertex();
+    setGraphState(vertex, graphState);
+    assertValidVertex(conf, graphState, vertex, new LongWritable(12345), new DoubleWritable(42.42),
+       new Edge<LongWritable, DoubleWritable>(new LongWritable(9999999), new DoubleWritable(99.9)));
+    assertEquals(vertex.getNumOutEdges(), 1);
+  }
+
+  public static class DummyVertex
+      extends EdgeListVertex<LongWritable, DoubleWritable,
+      DoubleWritable, BooleanWritable> {
+    @Override
+    public void compute(Iterator<BooleanWritable> msgIterator) throws IOException {
+      // ignore
+    }
+  }
+}
diff --git a/src/test/java/org/apache/giraph/lib/TestTextDoubleDoubleAdjacencyListVertexInputFormat.java b/src/test/java/org/apache/giraph/lib/TestTextDoubleDoubleAdjacencyListVertexInputFormat.java
new file mode 100644
index 0000000..c8641cc
--- /dev/null
+++ b/src/test/java/org/apache/giraph/lib/TestTextDoubleDoubleAdjacencyListVertexInputFormat.java
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.giraph.lib;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import junit.framework.TestCase;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.giraph.graph.BspUtils;
+import org.apache.giraph.graph.Edge;
+import org.apache.giraph.graph.GiraphJob;
+import org.apache.giraph.graph.GraphState;
+import org.apache.giraph.graph.EdgeListVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestTextDoubleDoubleAdjacencyListVertexInputFormat extends TestCase {
+
+  private RecordReader<LongWritable, Text> rr;
+  private Configuration conf;
+  private TaskAttemptContext tac;
+  private GraphState<Text, DoubleWritable, DoubleWritable, BooleanWritable> graphState;
+
+  public void setUp() throws IOException, InterruptedException {
+    rr = mock(RecordReader.class);
+    when(rr.nextKeyValue()).thenReturn(true).thenReturn(false);
+    conf = new Configuration();
+    conf.setClass(GiraphJob.VERTEX_CLASS, DummyVertex.class, BasicVertex.class);
+    conf.setClass(GiraphJob.VERTEX_INDEX_CLASS, Text.class, Writable.class);
+    conf.setClass(GiraphJob.VERTEX_VALUE_CLASS, DoubleWritable.class, Writable.class);
+    graphState = mock(GraphState.class);
+    tac = mock(TaskAttemptContext.class);
+    when(tac.getConfiguration()).thenReturn(conf);
+  }
+
+  public void testIndexMustHaveValue() throws IOException, InterruptedException {
+    String input = "hi";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+
+    try {
+      vr.nextVertex();
+      vr.getCurrentVertex();
+      fail("Should have thrown an IllegalArgumentException");
+    } catch (IllegalArgumentException iae) {
+      assertTrue(iae.getMessage().startsWith("Line did not split correctly: "));
+    }
+  }
+
+  public void testEdgesMustHaveValues() throws IOException, InterruptedException {
+    String input = "index\t55.66\tindex2";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+    vr.initialize(null, tac);
+    try {
+      vr.nextVertex();
+      vr.getCurrentVertex();
+      fail("Should have thrown an IllegalArgumentException");
+    } catch (IllegalArgumentException iae) {
+      assertTrue(iae.getMessage().startsWith("Line did not split correctly: "));
+    }
+  }
+
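+  /** BasicVertex#setGraphState is not publicly accessible, so invoke it via reflection. */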
+  public static void setGraphState(BasicVertex vertex, GraphState graphState) throws Exception {
+    Class<? extends BasicVertex> c = BasicVertex.class;
+    Method m = c.getDeclaredMethod("setGraphState", GraphState.class);
+    m.setAccessible(true);
+    m.invoke(vertex, graphState);
+  }
+
+  public static <I extends WritableComparable, V extends Writable,
+      E extends Writable, M extends Writable> void assertValidVertex(Configuration conf,
+      GraphState<I, V, E, M> graphState, BasicVertex<I, V, E, M> actual,
+      I expectedId, V expectedValue, Edge<I, E>... edges)
+      throws Exception {
+    BasicVertex<I, V, E, M> expected = BspUtils.createVertex(conf);
+    setGraphState(expected, graphState);
+
+    // FIXME: this may not work if the expected vertex is not instantiated properly.
+    Map<I, E> edgeMap = Maps.newHashMap();
+    for(Edge<I, E> edge : edges) {
+      edgeMap.put(edge.getDestVertexId(), edge.getEdgeValue());
+    }
+    expected.initialize(expectedId, expectedValue, edgeMap, null);
+    assertValid(expected, actual);
+  }
+
+  public static
+  <I extends WritableComparable, V extends Writable, E extends Writable, M extends Writable> void
+  assertValid(BasicVertex<I, V, E, M> expected, BasicVertex<I, V, E, M> actual) {
+    assertEquals(expected.getVertexId(), actual.getVertexId());
+    assertEquals(expected.getVertexValue(), actual.getVertexValue());
+    assertEquals(expected.getNumEdges(), actual.getNumEdges());
+    List<Edge<I, E>> expectedEdges = Lists.newArrayList();
+    List<Edge<I, E>> actualEdges = Lists.newArrayList();
+    for(I actualDestId : actual) {
+      actualEdges.add(new Edge<I, E>(actualDestId, actual.getEdgeValue(actualDestId)));
+    }
+    for(I expectedDestId : expected) {
+      expectedEdges.add(new Edge<I, E>(expectedDestId, expected.getEdgeValue(expectedDestId)));
+    }
+    Collections.sort(expectedEdges);
+    Collections.sort(actualEdges);
+    for(int i = 0; i < expectedEdges.size(); i++) {
+      assertEquals(expectedEdges.get(i), actualEdges.get(i));
+    }
+  }
+
+  public void testHappyPath() throws Exception {
+    String input = "Hi\t0\tCiao\t1.123\tBomdia\t2.234\tOla\t3.345";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+    assertTrue("Should have been able to add a vertex", vr.nextVertex());
+    BasicVertex<Text, DoubleWritable, DoubleWritable, BooleanWritable> vertex =
+        vr.getCurrentVertex();
+    setGraphState(vertex, graphState);
+    assertValidVertex(conf, graphState, vertex, new Text("Hi"), new DoubleWritable(0),
+        new Edge<Text, DoubleWritable>(new Text("Ciao"), new DoubleWritable(1.123d)),
+        new Edge<Text, DoubleWritable>(new Text("Bomdia"), new DoubleWritable(2.234d)),
+        new Edge<Text, DoubleWritable>(new Text("Ola"), new DoubleWritable(3.345d)));
+    assertEquals(vertex.getNumOutEdges(), 3);
+  }
+
+  public void testLineSanitizer() throws Exception {
+    String input = "Bye\t0.01\tCiao\t1.001\tTchau\t2.0001\tAdios\t3.00001";
+
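+    // A sanitizer that upper-cases each input line before it is parsed.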
+    AdjacencyListVertexReader.LineSanitizer toUpper =
+        new AdjacencyListVertexReader.LineSanitizer() {
+      @Override
+      public String sanitize(String s) {
+        return s.toUpperCase();
+      }
+    };
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr, toUpper);
+
+    vr.initialize(null, tac);
+    assertTrue("Should have been able to read vertex", vr.nextVertex());
+    BasicVertex<Text, DoubleWritable, DoubleWritable, BooleanWritable> vertex =
+        vr.getCurrentVertex();
+    setGraphState(vertex, graphState);
+    assertValidVertex(conf, graphState, vertex,
+        new Text("BYE"), new DoubleWritable(0.01d),
+        new Edge<Text, DoubleWritable>(new Text("CIAO"), new DoubleWritable(1.001d)),
+        new Edge<Text, DoubleWritable>(new Text("TCHAU"), new DoubleWritable(2.0001d)),
+        new Edge<Text, DoubleWritable>(new Text("ADIOS"), new DoubleWritable(3.00001d)));
+
+    assertEquals(vertex.getNumOutEdges(), 3);
+  }
+
+  public void testDifferentSeparators() throws Exception {
+    String input = "alpha:42:beta:99";
+
+    when(rr.getCurrentValue()).thenReturn(new Text(input));
+    conf.set(AdjacencyListVertexReader.LINE_TOKENIZE_VALUE, ":");
+    TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable> vr =
+        new TextDoubleDoubleAdjacencyListVertexInputFormat.VertexReader<BooleanWritable>(rr);
+
+    vr.initialize(null, tac);
+    assertTrue("Should have been able to read vertex", vr.nextVertex());
+    BasicVertex<Text, DoubleWritable, DoubleWritable, BooleanWritable> vertex =
+        vr.getCurrentVertex();
+    setGraphState(vertex, graphState);
+    assertValidVertex(conf, graphState, vertex, new Text("alpha"), new DoubleWritable(42d),
+        new Edge<Text, DoubleWritable>(new Text("beta"), new DoubleWritable(99d)));
+    assertEquals(vertex.getNumOutEdges(), 1);
+  }
+
+  public static class DummyVertex
+      extends EdgeListVertex<Text, DoubleWritable,
+      DoubleWritable, BooleanWritable> {
+    @Override
+    public void compute(Iterator<BooleanWritable> msgIterator) throws IOException {
+      // ignore
+    }
+  }
+}
diff --git a/src/test/java/org/apache/giraph/utils/ComparisonUtilsTest.java b/src/test/java/org/apache/giraph/utils/ComparisonUtilsTest.java
new file mode 100644
index 0000000..260cabb
--- /dev/null
+++ b/src/test/java/org/apache/giraph/utils/ComparisonUtilsTest.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import com.google.common.collect.Lists;
+import junit.framework.TestCase;
+
+public class ComparisonUtilsTest extends TestCase {
+
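+    // ComparisonUtils.equal compares two Iterables element by element,
+    // in order; these tests pin down that contract.
+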
+    public void testEquality() {
+        Iterable<String> one = Lists.newArrayList("one", "two", "three");
+        Iterable<String> two = Lists.newArrayList("one", "two", "three");
+
+        assertTrue(ComparisonUtils.equal(one, one));
+        assertTrue(ComparisonUtils.equal(one, two));
+        assertTrue(ComparisonUtils.equal(two, two));
+        assertTrue(ComparisonUtils.equal(two, one));
+    }
+
+    public void testEqualityEmpty() {
+        Iterable<String> one = Lists.newArrayList();
+        Iterable<String> two = Lists.newArrayList();
+
+        assertTrue(ComparisonUtils.equal(one, one));
+        assertTrue(ComparisonUtils.equal(one, two));
+        assertTrue(ComparisonUtils.equal(two, two));
+        assertTrue(ComparisonUtils.equal(two, one));
+    }
+
+    public void testInequality() {
+        Iterable<String> one = Lists.newArrayList("one", "two", "three");
+        Iterable<String> two = Lists.newArrayList("two", "three", "four");
+        Iterable<String> three = Lists.newArrayList();
+
+        assertFalse(ComparisonUtils.equal(one, two));
+        assertFalse(ComparisonUtils.equal(one, three));
+        assertFalse(ComparisonUtils.equal(two, one));
+        assertFalse(ComparisonUtils.equal(two, three));
+        assertFalse(ComparisonUtils.equal(three, one));
+        assertFalse(ComparisonUtils.equal(three, two));
+    }
+
+    public void testInequalityDifferentLengths() {
+        Iterable<String> one = Lists.newArrayList("one", "two", "three");
+        Iterable<String> two = Lists.newArrayList("one", "two", "three", "four");
+
+        assertFalse(ComparisonUtils.equal(one, two));
+        assertFalse(ComparisonUtils.equal(two, one));
+    }
+
+}
diff --git a/src/test/java/org/apache/giraph/utils/MockUtils.java b/src/test/java/org/apache/giraph/utils/MockUtils.java
new file mode 100644
index 0000000..93418ab
--- /dev/null
+++ b/src/test/java/org/apache/giraph/utils/MockUtils.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.giraph.utils;
+
+import org.apache.giraph.comm.WorkerCommunications;
+import org.apache.giraph.graph.GraphState;
+import org.apache.giraph.graph.BasicVertex;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.mockito.Mockito;
+
+/** Simplifies mocking for unit tests of vertices. */
+public class MockUtils {
+
+    private MockUtils() {
+    }
+
+    /**
+     * Mocks and holds "environment objects" that are injected into a vertex.
+     *
+     * @param <I> vertex id
+     * @param <V> vertex data
+     * @param <E> edge data
+     * @param <M> message data
+     */
+    public static class MockedEnvironment<I extends WritableComparable,
+            V extends Writable, E extends Writable, M extends Writable> {
+
+        private final GraphState graphState;
+        private final Mapper.Context context;
+        private final Configuration conf;
+        private final WorkerCommunications communications;
+
+        public MockedEnvironment() {
+            graphState = Mockito.mock(GraphState.class);
+            context = Mockito.mock(Mapper.Context.class);
+            conf = Mockito.mock(Configuration.class);
+            communications = Mockito.mock(WorkerCommunications.class);
+        }
+
+        /** the injected graph state */
+        public GraphState getGraphState() {
+            return graphState;
+        }
+
+        /** the injected mapper context  */
+        public Mapper.Context getContext() {
+            return context;
+        }
+
+        /** the injected hadoop configuration */
+        public Configuration getConfiguration() {
+            return conf;
+        }
+
+        /** the injected worker communications */
+        public WorkerCommunications getCommunications() {
+            return communications;
+        }
+
+        /** assert that the test vertex has sent the given message to a particular vertex */
+        public void verifyMessageSent(I targetVertexId, M message) {
+            Mockito.verify(communications).sendMessageReq(targetVertexId,
+                    message);
+        }
+
+        /** assert that the test vertex has sent no messages to any vertex */
+        public void verifyNoMessageSent() {
+            Mockito.verifyZeroInteractions(communications);
+        }
+    }
+
+    /**
+     * Prepares a vertex for use in a unit test by setting its internal state
+     * and injecting mocked dependencies.
+     *
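+     * A minimal usage sketch (MyVertex is a hypothetical BasicVertex
+     * subclass; all identifiers below are illustrative only):
+     *
+     *   MyVertex vertex = new MyVertex();
+     *   MockedEnvironment<LongWritable, DoubleWritable, DoubleWritable,
+     *       DoubleWritable> env = MockUtils.prepareVertex(vertex, 1L,
+     *           new LongWritable(1), new DoubleWritable(0d), false);
+     *   vertex.compute(messageIterator);
+     *   env.verifyMessageSent(new LongWritable(2), new DoubleWritable(0.5));
+     *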
+     * @param vertex the vertex under test
+     * @param superstep the superstep to emulate
+     * @param vertexId initial vertex id
+     * @param vertexValue initial vertex value
+     * @param isHalted initial halted state of the vertex
+     * @param <I> vertex id
+     * @param <V> vertex data
+     * @param <E> edge data
+     * @param <M> message data
+     * @return the mocked environment that was injected into the vertex
+     * @throws Exception if the vertex's internals cannot be set reflectively
+     */
+    public static <I extends WritableComparable, V extends Writable,
+            E extends Writable, M extends Writable>
+            MockedEnvironment<I, V, E, M> prepareVertex(
+            BasicVertex<I, V, E, M> vertex, long superstep, I vertexId,
+            V vertexValue, boolean isHalted) throws Exception {
+
+        MockedEnvironment<I, V, E, M>  env =
+                new MockedEnvironment<I, V, E, M>();
+
+        Mockito.when(env.getGraphState().getSuperstep()).thenReturn(superstep);
+        Mockito.when(env.getGraphState().getContext())
+                .thenReturn(env.getContext());
+        Mockito.when(env.getContext().getConfiguration())
+                .thenReturn(env.getConfiguration());
+        Mockito.when(env.getGraphState().getWorkerCommunications())
+                .thenReturn(env.getCommunications());
+
+        ReflectionUtils.setField(vertex, "vertexId", vertexId);
+        ReflectionUtils.setField(vertex, "vertexValue", vertexValue);
+        ReflectionUtils.setField(vertex, "graphState", env.getGraphState());
+        ReflectionUtils.setField(vertex, "halt", isHalted);
+
+        return env;
+    }
+}